Is there a way to programmatically revoke all user-generated tokens for a service user?
I saw some endpoints in the Multipass internal API, but from my understanding those are locked down for FE use. This is for killing off restricted tokens in the break-the-glass case, or when we kill off a connection to a remote system.
If this is just for a one-off break-the-glass scenario, feel free to use internal APIs. You probably want the getTokens and revokeToken endpoints.
Reminder: restricted tokens are not user-generated tokens, and they should have very short lifetimes (ideally 1 hour or less), so revoking them shouldn't really be a concern.
There is some hype about DeFi, and basically it all points to Ethereum
(I have not yet seen any non-Ethereum blockchain that promotes the DeFi term).
Then there is MetaMask, which is essentially a wallet distributed as a Chrome browser extension.
But some blockchain sites specifically require MetaMask and establish some communication with it.
I know Ethereum, but it is a blockchain and basically a backend technology.
I think it has nothing to do with browsers and websites.
What exactly (technically speaking) is an Ethereum-blockchain-enabled website?
Or, the other way round, how exactly does MetaMask interact with the website being visited?
How websites interact with the MetaMask extension
The MetaMask extension injects the ethereum property into the JS window object. This property links to the JS API of the MetaMask extension, allowing the website some level of control - such as "open a window requesting the user to submit this transaction" (but not "get the private key", for example).
This example JS code opens the MetaMask window and asks the user for permission to share their (public) addresses with the website when myBtn is clicked. The shared addresses are then saved into the accounts variable.
$('#myBtn').click(async (e) => {
  let accounts = await window.ethereum.request({
    method: 'eth_requestAccounts'
  });
});
You can find more info at https://docs.metamask.io/guide/getting-started.html#getting-started
Basically, in a decentralised application (DApp) the HTML frontend interacts with the blockchain directly, without going through a web server. This is done with a wallet, existing independently from the DApp, which confirms all the transactions. Any sent transaction goes directly from the frontend to the Ethereum blockchain through a JSON-RPC API node (see the link for a description of the request round trip).
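As a rough sketch of that round trip (the recipient address and amount below are hypothetical placeholders), the frontend asks the wallet to submit a transaction; MetaMask opens its confirmation window, and only if the user approves does the signed transaction get forwarded to a JSON-RPC node:

const [from] = await window.ethereum.request({ method: 'eth_requestAccounts' });
const txHash = await window.ethereum.request({
  method: 'eth_sendTransaction',
  params: [{
    from,                                               // address the user agreed to share
    to: '0x0000000000000000000000000000000000000000',   // hypothetical recipient address
    value: '0x2386f26fc10000'                           // 0.01 ETH in wei, hex-encoded
  }]
});
console.log('Transaction submitted, hash:', txHash);

Note that the website never sees a private key; it only receives the transaction hash after the wallet has signed and broadcast the transaction.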
The main differences to centralised web applications using a server-side backend:
The backend developer cannot break the terms of the (smart) contract, e.g. steal users' money into his own pocket. This is called the non-custodial model and it mitigates counterparty risk.
The backend cannot pull money or make the user do something they did not accept, because the wallet confirms all transactions. Users, or their sophisticated representatives, can double-check all smart contracts the wallet is going to interact with on a blockchain.
The blockchain never goes down, unlike centralised services, because it is highly distributed (around 10,000 nodes).
Users pay for all transactions themselves, using ETH as the currency for the transaction fees.
Note that the model is not exclusive to Ethereum, but is also used by many other blockchains. Live DeFi applications can be found e.g. on the EOS, Solana and NEAR blockchains and on many Ethereum Virtual Machine compatible chains like Polygon, Avalanche and Binance Smart Chain.
Note that currently most users still need to trust the HTML code downloaded from some centralised web server; we have seen e.g. DNS takeover attacks in the past. However, this model still greatly reduces the risk, as "signing in" to a web application does not automatically put the user at risk, because the wallet still needs to confirm any transaction.
Also note that blockchain makes little sense for applications that do not involve financial assets or other assets with value (like NFTs), because the main use case of a blockchain is to provide financial sovereignty and eliminate counterparty risk. This tradeoff comes with high transaction costs and the need for some sort of cryptocurrency.
I am exploring the use of GitHub Container Registry (ghcr) for storing Docker images built by my Continuous Integration (CI) pipelines, which run in GitHub Actions.
I am reading Migrating to GitHub Container Registry for Docker images which states:
Add your new container registry authentication personal access token (PAT) as a GitHub Actions secret. GitHub Container Registry does not support using GITHUB_TOKEN for your PAT so you must use a different custom variable, such as CR_PAT.
Earlier in that article, though, there is a link to Security hardening for GitHub Actions, which states:
You should never use personal access tokens from your own account. These tokens grant access to all repositories within the organizations that you have access to, as well as all personal repositories in your user account. This indirectly grants broad access to all write-access users of the repository the workflow is in. In addition, if you later leave an organization, workflows using this token will immediately break, and debugging this issue can be challenging.
If a personal access token is used, it should be one that was generated for a new account that is only granted access to the specific repositories that are needed for the workflow. Note that this approach is not scalable and should be avoided in favor of alternatives, such as deploy keys.
Those quotes seem contradictory. The first is telling me to use PATs to authenticate to ghcr in my CI pipelines; the other seems to be telling me that I shouldn't.
Am I correct that they are contradictory or have I misunderstood?
What is the correct course of action for authenticating to ghcr from a GitHub Action workflow?
It's kind of a hybrid of both statements.
You need to use a personal access token (PAT), and since GitHub Container Registry does not support using GITHUB_TOKEN for your PAT, you must use a different custom variable, such as CR_PAT.
You can create a new GitHub account that will be used exclusively for non-human automation such as CI/CD.
Since this GitHub account won't be used by a human, it's called a machine user, and it is permitted under GitHub's terms of service.
You should then give this machine user account the minimum number of privileges needed to do its job.
If you used a PAT belonging to your personal human account rather than to a separate least-privilege machine account, there would be a risk that the token could be obtained by others, and hence your personal account and all its permissions would be compromised (very bad!). The likelihood of this risk is greater when working in an organization with multiple colleagues/collaborators.
From: https://github.community/t/should-i-use-a-personal-access-token-for-accessing-ghcr-from-github-actions/165381
We wish to decouple the systems between 2 separate organizations (as an example: one could be a set of in house applications and the other a set of 3rd party applications). Although we could do this using REST based APIs, we wish to achieve things like temporal decoupling, scalability, reliable and durable communication, workload decoupling (through fan-out), etc. And it is for these reasons, we wish to use a message bus.
Now one could use Amazon's SNS and SQS as the message bus infrastructure, where our org would have an SNS instance which would publish to the 3rd party SQS instance. Similarly, for messages the 3rd party wished to send to us, they would post to their SNS instance, which would then publish to our SQS instance. See: Cross Account Integration with Amazon SNS
I was thinking of how one would achieve this sort of cross-organization integration using Azure Service Bus (ASB), as we are heavily invested in Azure. But ASB doesn't have the ability to publish from one instance to another instance belonging to a different organization (or even to another instance in the same organization, not yet at least). Given this limitation, the plan is that we would give the 3rd party vendor one set of connection strings that would allow them to listen to and process messages that we posted, and a separate set of connection strings that would let them post messages to a topic which we could then subscribe to and process.
My question is: Is this a good idea? Or would this be considered an anti-pattern? My biggest concern is the fact that, while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled to the point that we need to communicate between the 2 organizations on not just the endpoints, but also how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session id, message id, etc.).
Is this a valid concern?
Have you done this?
What other issues might I run into?
Using Azure Service Bus connection strings with different Shared Access Policies for senders and receivers (Send and Listen) is exactly how senders and receivers with limited permissions are intended to work - just like you intend to use it.
My biggest concern is the fact that, while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled to the point that we need to communicate between the 2 organizations on not just the endpoints, but also how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session id, message id, etc.).
The coupling always exists. You're coupled to the language you're using, to the datastore technology used to persist your data, to the cloud vendor you're using. This is not the type of coupling I'd be worried about, unless you're planning to change those on a monthly basis.
Now, more specifically to the communication patterns: sessions would be a business requirement and not a coupling. If you require ordered message delivery, then what else would you do? On Amazon you'd also be "coupling" to a FIFO queue to achieve ordering. Message ID is by no means coupling either; it's an attribute on a message, and if the receiver chooses to ignore it, they can. Yes, you're coupling to the BrokeredMessage/Message envelope and serialization, but how else would you send and receive messages? This is more of a contract for the parties to agree upon.
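For illustration, here is a minimal sketch of what the two sides of that contract can look like, using the @azure/service-bus JavaScript SDK (the topic name, subscription name and environment variable names are hypothetical): the 3rd party holds a Send-only connection string and sets the agreed message id / session id, while you receive with a Listen-only connection string.

const { ServiceBusClient } = require('@azure/service-bus');

async function main() {
  // 3rd-party side: connection string scoped to a Send-only Shared Access Policy.
  const senderClient = new ServiceBusClient(process.env.SEND_CONNECTION_STRING);
  const sender = senderClient.createSender('partner-orders');   // hypothetical topic name
  await sender.sendMessages({
    body: { orderId: 42 },
    messageId: 'order-42',     // picked up by duplicate detection, if enabled on the entity
    sessionId: 'customer-7'    // only meaningful if the subscription is session-enabled
  });

  // Your side: connection string scoped to a Listen-only Shared Access Policy.
  // (For a session-enabled subscription you would use acceptSession instead.)
  const receiverClient = new ServiceBusClient(process.env.LISTEN_CONNECTION_STRING);
  const receiver = receiverClient.createReceiver('partner-orders', 'my-subscription');
  const [message] = await receiver.receiveMessages(1);
  console.log(message.body);
  await receiver.completeMessage(message);

  await senderClient.close();
  await receiverClient.close();
}

main();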
One name for the pattern of connecting service buses between organizations is "Shovel" (that's what it is called in RabbitMQ):
Sometimes it is necessary to reliably and continually move messages from a source (e.g. a queue) in one broker to a destination in another broker (e.g. an exchange). The Shovel plugin allows you to configure a number of shovels, which do just that and start automatically when the broker starts.
In the case of Azure, one way to achieve "shovels" is by using Logic Apps, as they provide the ability to connect to ASB entities in different namespaces.
See:
What are Logic Apps
Service Bus Connectors
Video: Use Azure Enterprise Integration Services to run cloud apps at scale
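If Logic Apps are not an option, the same "shovel" idea can also be hand-rolled: a small process that holds a Listen connection string for the source namespace and a Send connection string for the destination namespace and forwards whatever it receives. A rough sketch with the @azure/service-bus SDK (queue names and environment variable names are hypothetical):

const { ServiceBusClient } = require('@azure/service-bus');

// Listen on a queue in the source namespace, re-send to a queue in the destination namespace.
const source = new ServiceBusClient(process.env.SOURCE_LISTEN_CONNECTION_STRING);
const destination = new ServiceBusClient(process.env.DESTINATION_SEND_CONNECTION_STRING);

const receiver = source.createReceiver('outbox');        // hypothetical source queue
const sender = destination.createSender('inbox');        // hypothetical destination queue

receiver.subscribe({
  async processMessage(message) {
    // Forward the payload, completing the source message only after the send succeeds.
    await sender.sendMessages({ body: message.body });
    await receiver.completeMessage(message);
  },
  async processError(err) {
    console.error('Shovel error:', err.error);
  }
}, { autoCompleteMessages: false });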
I am developing an app in C++ which will run in a Windows desktop environment. It will be distributed to a number of customers, and I need to store log files from the customers in a central location where I can access them. Is Google Drive a suitable platform for this and, if so, what is the best approach? Should I be looking at an Application Owned Account, for example? Also, I am concerned by this paragraph in the Google documentation:
"Note that there are limits on the number of refresh tokens that will be issued; one limit per client/user combination, and another per user across all clients. You should save refresh tokens in long-term storage and continue to use them as long as they remain valid. If your application requests too many refresh tokens, it may run into these limits, in which case older refresh tokens will stop working"
How long does a token remain valid for and what are the limits on refresh tokens?
Best regards
Trevor
You could use Google Drive for this; it's not a bad idea really. But since it's only log files you are storing, and they are all owned by the application, you're not going to need to bother with user OAuth2. You could do it with a Google Drive service account. This way the application owns the account and is just uploading data to it.
https://developers.google.com/drive/service-accounts
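A minimal sketch of that approach with the Node.js googleapis client (the key file path, folder ID and log file name below are hypothetical); the same flow can also be driven straight against the Drive REST API from a C++ app:

const fs = require('fs');
const { google } = require('googleapis');

async function uploadLog() {
  // Authenticate as the service account - the application owns this identity,
  // so no end-user OAuth2 consent flow is involved.
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account-key.json',                    // hypothetical key file
    scopes: ['https://www.googleapis.com/auth/drive.file']
  });
  const drive = google.drive({ version: 'v3', auth });

  await drive.files.create({
    requestBody: {
      name: 'customer-123.log',                             // hypothetical log file name
      parents: ['LOG_FOLDER_ID']                            // hypothetical folder in the service account's Drive
    },
    media: {
      mimeType: 'text/plain',
      body: fs.createReadStream('customer-123.log')
    }
  });
}

uploadLog().catch(console.error);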
I am currently working on a REST/JSON API that has to provide some services through remote websites. I do not know the end customers of these websites and they would/should not have an account on the API server; the only accounts existing on the API server would be the accounts identifying the websites. Since this is all RESTful, and therefore all communication would be between the end-user browser (through JavaScript/JSON) and my REST API service, how can I make sure that the system won't be abused by 3rd parties interested in increasing the middleman's bill (where the middleman is the owner of the website reselling my services)? What authentication methods would you recommend that would work and would prevent users from just taking the JS code from the website and calling it 1,000,000 times just to bankrupt the website owner? I was thinking of using the HTTP_REFERER and translating that to an IP address (to find out which server is hosting the code, and authenticate based on this IP), but I presume the HTTP_REFERER can easily be spoofed. I'm not looking for my customers' end customers to register on the API server; this would defeat the purpose of this API.
Some ideas please?
Thanks,
Dan
This might not be an option for you, but what I've done before in this case is to make a proxy on top of the REST calls. The website calls its own internal service and then that service calls your REST calls. The advantage is that, like you said, no one can hit your REST calls directly or try to spoof calls.
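A minimal sketch of that proxy idea (using Express; the upstream endpoint and API_KEY environment variable are hypothetical): the browser only ever talks to the website's own server, and the credential for your API never leaves that server, so it can't be lifted out of the client-side JS. The website owner can then put their own rate limiting or user sessions in front of this route.

const express = require('express');
const app = express();

// The website's JS calls /api/quote on its own origin; this handler forwards
// the request to the real REST API with a server-side secret attached.
app.get('/api/quote', async (req, res) => {
  const upstream = await fetch('https://api.example.com/v1/quote', {   // hypothetical upstream endpoint
    headers: { Authorization: `Bearer ${process.env.API_KEY}` }        // secret stays on the server
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);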
Failing that, you could implement an authentication scheme like HMAC (http://en.wikipedia.org/wiki/Hash-based_message_authentication_code). I've seen a lot of APIs use this.
Using HMAC-SHA1 for API authentication - how to store the client password securely?
Here is what Java code might look like to authenticate: http://support.ooyala.com/developers/documentation/api/signature_java.html
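For the HMAC route, here is a sketch of the signing side in Node (the header names and canonical string are made up for illustration); the important parts are that each website gets its own key ID and secret, and that the signature covers the request details and a timestamp so a captured call can't simply be replayed with different parameters:

const crypto = require('crypto');

// Build the canonical string and sign it with the website's shared secret.
function signRequest(method, path, body, timestamp, secret) {
  const canonical = [method, path, timestamp, body].join('\n');
  return crypto.createHmac('sha1', secret).update(canonical).digest('hex');
}

// The website's server attaches the key ID, timestamp and signature to each call;
// the API server recomputes the HMAC with the secret stored for that key ID and
// rejects the request if it doesn't match or the timestamp is too old.
const timestamp = Date.now().toString();
const signature = signRequest('GET', '/v1/quote', '', timestamp, process.env.PARTNER_SECRET);
console.log({ 'X-Key-Id': 'website-42', 'X-Timestamp': timestamp, 'X-Signature': signature });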
Either way I think you'll have to do some work server side. Otherwise people might be able to reverse engineer the API if everything is purely client side.