Where to broadcast a Bitcoin Cash transaction?

Using a JSON HTTP POST, which online service is best for broadcasting a Bitcoin Cash transaction?
I'm looking for the equivalent of https://blockchain.info/pushtx

There are a few options to broadcast a transaction for Bitcoin and Bitcoin Cash. The first, but also the most expensive in terms of time, is to set up a Bitcoin ABC node on your machine and let it sync. Once that's done, you can simply call the sendrawtransaction RPC and the transaction will get pushed to other nodes in the network.
The second option is to use Wladimir's bitcoin-submittx tool to connect to a number of nodes and submit the transaction to them. This tool was originally written for Bitcoin, but also works for Bitcoin Cash. It requires a number of node addresses, but you can use the DNS seeds to get some:
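For example, a minimal Python sketch of that RPC call against a local node, assuming the default mainnet RPC port and placeholder credentials from your bitcoin.conf:

import requests

RPC_URL = "http://127.0.0.1:8332"   # default mainnet RPC port (assumption)
RPC_USER = "rpcuser"                # placeholder, use your bitcoin.conf values
RPC_PASS = "rpcpassword"

def send_raw_transaction(tx_hex):
    """Push a signed raw transaction to the locally running node via JSON-RPC."""
    payload = {
        "jsonrpc": "1.0",
        "id": "broadcast",
        "method": "sendrawtransaction",
        "params": [tx_hex],
    }
    resp = requests.post(RPC_URL, json=payload, auth=(RPC_USER, RPC_PASS))
    result = resp.json()
    if result.get("error"):
        raise RuntimeError(result["error"])
    return result["result"]  # the transaction id

# txid = send_raw_transaction("0100000001...")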
python2 bitcoin-submittx mainnet ${TXHEX} $(dig seed-abc.bitcoinforks.org)
This should submit the TX to some random nodes in the network.

https://rest.bitcoin.com provides a REST API for broadcasting transactions. This BITBOX code example shows how to construct a BCH transaction then broadcast it using rest.bitcoin.com:
https://github.com/Bitcoin-com/bitbox-javascript-sdk/blob/master/examples/applications/wallet/send-bch/send-bch.js
In particular, look at the last few lines of the example:
// Broadcast transaction to the network
const broadcast = await BITBOX.RawTransactions.sendRawTransaction(hex)
console.log(`Transaction ID: ${broadcast}`)
API documentation:
https://developer.bitcoin.com/bitbox/docs/getting-started
If you already have the raw hex and you want a manual way to broadcast it, you can go directly to the endpoint on rest.bitcoin.com, and paste in the hex:
https://rest.bitcoin.com/#/rawtransactions/sendRawTransaction
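If you'd rather script that call instead of using the interactive page, here is a rough Python sketch; the exact v2 route below is an assumption, so verify it against the rest.bitcoin.com documentation:

import requests

def broadcast_via_rest(tx_hex):
    # Assumed endpoint shape based on the documented sendRawTransaction route;
    # check https://rest.bitcoin.com for the exact path.
    url = "https://rest.bitcoin.com/v2/rawtransactions/sendRawTransaction/" + tx_hex
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()  # transaction id on success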

Related

How to detect contract creation timestamp in Ethereum blockchain nodes?

I am trying to detect a contract's creation timestamp from an Ethereum node, but there are no methods/functions for it.
How do we find this?
I am getting swap logs and parsing them, but couldn't find a direct way to detect the contract creation date.
Contract creation date is not available using a direct method.
Blockchain explorers usually collect it the following way:
They scan every transaction on every block on the chain
If the transaction receipt contains a non-null contractAddress property, it means that this transaction deployed a contract.
The time of the transaction (and effectively the time of the contract deployment) is available as part of the block in which the tx was included.
This is quite complicated. There is no method to get the Transaction that deployed the Contract.
First of all, there are multiple ways to deploy a contract:
Via a deployment transaction; here you will get the contractAddress in the tx receipt.
Via another contract, with the create/create2 opcodes. This is how the Uniswap Factory contract creates Pair contracts, for example. There is no RPC API to find out this type of deployment; you need a node that can return such "internal calls".
There are some workarounds. In your case, do you just need the timestamp? Then you only have to find the block in which the contract was deployed:
Use an archive node.
Use binary search to find the first block number at which eth_getCode returns non-empty data (the previous block returns empty data).
Get the timestamp of that block (see the sketch below).
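A rough web3.py sketch of that binary search, assuming a placeholder archive-node RPC URL (eth_getCode is exposed as get_code):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-archive-node"))  # placeholder RPC URL

def find_deployment_block(address):
    """Binary-search the first block in which the contract has code."""
    lo, hi = 0, w3.eth.block_number
    while lo < hi:
        mid = (lo + hi) // 2
        if w3.eth.get_code(address, block_identifier=mid) == b"":
            lo = mid + 1   # not deployed yet at `mid`
        else:
            hi = mid       # code exists, deployment is at or before `mid`
    return lo

block_number = find_deployment_block("0x...")             # your contract address
timestamp = w3.eth.get_block(block_number)["timestamp"]   # deployment time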
If you need the transaction:
Use the previous method to find the block
Load transactions and their receipts.
Search for a Deployment Transaction, which has your contractAddress in the receipt.
If not found, the creating contract almost always emits an event when it deploys another contract, so check the address, data and topics of every log entry in the block for your contract address.
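Continuing the same web3.py sketch, scanning the deployment block for a plain deployment transaction could look like this:

def find_deployment_tx(address, block_number):
    """Scan a block for the transaction whose receipt created `address`."""
    block = w3.eth.get_block(block_number, full_transactions=True)
    for tx in block.transactions:
        receipt = w3.eth.get_transaction_receipt(tx.hash)
        if receipt.contractAddress and \
           receipt.contractAddress.lower() == address.lower():
            return tx.hash
    return None  # deployed internally (create/create2); check the logs instead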
Not a universal workaround: if you know the event that gets emitted in the constructor or by the parent contract, you can search for it with getPastLogs.
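In web3.py that search is eth_getLogs; a rough sketch, using the Uniswap PairCreated signature purely as a placeholder for whatever event the parent contract actually emits:

# Hypothetical event signature; replace with the real event of the parent contract.
topic = Web3.keccak(text="PairCreated(address,address,address,uint256)").hex()
logs = w3.eth.get_logs({
    "fromBlock": block_number,
    "toBlock": block_number,
    "topics": [topic],
})
# Inspect log["address"], log["topics"] and log["data"] for your contract address.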
You can use a third-party API, like https://docs.etherscan.io/api-endpoints/contracts, to get the contract creator and creation tx hash. After that:
Get the blockNumber from the transaction fetched by txHash.
Get the timestamp from the block fetched by blockNumber.
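Putting those steps together in Python (reusing the w3 instance from the sketches above; the endpoint parameters follow the Etherscan docs linked above but should be verified, and the API key is a placeholder):

import requests

ETHERSCAN_KEY = "YourApiKeyToken"  # placeholder

def creation_timestamp(address):
    # "Get Contract Creator and Creation Tx Hash" endpoint; verify field names
    # against the Etherscan documentation.
    resp = requests.get("https://api.etherscan.io/api", params={
        "module": "contract",
        "action": "getcontractcreation",
        "contractaddresses": address,
        "apikey": ETHERSCAN_KEY,
    }).json()
    tx_hash = resp["result"][0]["txHash"]
    tx = w3.eth.get_transaction(tx_hash)
    block = w3.eth.get_block(tx["blockNumber"])
    return block["timestamp"]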

Is there any way to stop a particular CAN message coming from another real ECU to DUT (real ECU) on the bus through CAPL scripting?

I am trying to create a CAPL script (test module) to automate my test cases. In the system (test setup) we have all the real ECUs connected to the device/ECU under test. I have come across the functions ILDisableMsg(messageName)/testDisableMsg(msgId) in CAPL, which block/stop a particular message from a simulated node (IL node). Similarly, is there any way to block/stop a particular message coming from a real ECU so the DUT does not receive it, through a CAPL script?
For your case, the correct way is to stop the physical connection with the real ECU. You could use:
TestSetEcuOffline: used to disconnect the ECU from the bus
TestSetEcuOnline: used to connect the ECU to the bus
But this will stop all communication with the real CAN bus channel.
To avoid that, try to stop sending, from the SW side, the internal signals that are mapped to your CAN message.
Don't forget to also either stop the IL block or remove the simulated message from the configuration!

Unable to get status code of oci-cli command

I need to get the response code to use in scripts.
For example, when I run this command:
oci compute instance update --instance-id ocid.of.instance --shape-config '{"OCPU":"2"}' --force
I will get this message:
ServiceError:
{
"code": "InternalError",
"message": "Out of host capacity.",
"opc-request-id": "3FF4337F4ECE43BBB4B8E52524E80247/37CB970D371A9C6BB01DFB23E754FE5B/18DFE9AE75B88A77AB3A1FBEBD3B191B",
"status": 500
}
In this case, I got the error message and a status code 500.
But if the command works, it outputs a full JSON of my instance's parameters, and I can only see a line with the response code 200 in debug mode.
Is there a way to show only the response code?
Currently the OCI CLI does not provide the HTTP response code directly in the response. The response contains either the service response in case of success or a service error message in case of error.
Can you explain how you are using the HTTP response code in your script? Could you not use the command's exit code (non-zero on error) to determine the error case, as in the sketch below?
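If the exit code is enough for your script, a small sketch of checking it with Python's subprocess (a plain shell $? test works the same way):

import subprocess

result = subprocess.run(
    ["oci", "compute", "instance", "update",
     "--instance-id", "ocid.of.instance",
     "--shape-config", '{"OCPU":"2"}', "--force"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    # stderr carries the ServiceError output shown above
    print("update failed:", result.stderr)
else:
    print("update succeeded")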
The error "Out of host capacity" means that the selected shape does not have any available servers in the selected region and availability domain (AD). Virtual machines (VMs) are dynamically provisioned: if an AD has reached a minimum threshold, new hypervisors (physical servers) will be automatically provisioned.
There may be some occasions where the additional capacity has not finished provisioning before the existing capacity is exhausted, but when retrying in 15 minutes the customer may find the shape they want is available.
Alternatively, selecting a different shape, AD or region will almost certainly have the capacity needed.
Bare metal instances: Host capacity is ordered on a proactive basis guided by the growth rate of a region. Specialized shapes such as DenseIO do not have as much spare overhead and may be more likely to run out of capacity. Customers may need to try another AD or region.

When we call a Solidity function via web3.js, how does the code flow, and what data formats are involved along the way?

When we call a Solidity function via web3.js, how does the code flow, and what data formats are involved along the way?
For example, if I call a Solidity function through web3.js, how does it get executed? Can anyone explain the complete flow?
First of all, I recommend taking the time to read How does Ethereum work, anyway?
But for now, a short explanation:
When you call a method on a contract through web3.js, the library will encode your method call as the data attribute of the transaction. Here's a good explanation about Ethereum transactions and the data attribute.
The Ethereum node your web3.js is connected to will receive your transaction and do some basic checks of nonce and balance.
Once the basic checks pass, the node will broadcast the transaction to the rest of the network.
When a network node receives a transaction with a data attribute, it will execute the transaction using the Ethereum EVM. The outcome of the transaction is a modified state of the contract storage. More about contract storage.
The expectation is that the transaction will produce the same state change on every single node in the network. This is how consensus is reached and the transaction (and the contract state change) becomes part of the canonical chain (mined and not belonging to an uncle block).
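To make the first step concrete, here is a small sketch in web3.py (the web3.js flow is equivalent); the ABI is a placeholder for a hypothetical set(uint256) function, and the point is just that the 4-byte selector plus the ABI-encoded arguments become the transaction's data attribute:

from web3 import Web3

# Minimal ABI for a hypothetical set(uint256) function.
abi = [{"name": "set", "type": "function",
        "inputs": [{"name": "x", "type": "uint256"}],
        "outputs": [], "stateMutability": "nonpayable"}]

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # your node
contract = w3.eth.contract(abi=abi)  # address omitted: we only encode here

# The call is ABI-encoded into the `data` attribute of the transaction.
# (Method is named encode_abi in newer web3.py releases.)
data = contract.encodeABI(fn_name="set", args=[42])
print(data)  # 0x60fe47b1... : selector for set(uint256) followed by the argument

# That encoded data is then sent as part of a transaction to the contract address, e.g.
# w3.eth.send_transaction({"to": contract_address, "data": data, "from": sender})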

Storing data in FIWARE Object Storage

I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand what the correct way to store files into the storage is.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows two ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The first version confuses me, because when I first looked at the only Node.js module that works with Object Storage (repo: fiware-object-storage), it seemed to use the first version. Since the module was making calls to the old (v1.1) API version instead of the presumably newest (v2.0) one referenced in the Python example, I'm not sure whether that is an outdated way of doing it or not.
As I played more with the module, I realised it didn't work and its code was a total mess. So I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help#lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Should I worry about the auth token expiring? I presume it isn't necessary to re-authenticate every time we interact with the storage. The authentication should happen once when the server starts up (we create an instance), and the token is kept internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal, and later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section
describes how to get a valid token assuming an identity management
system compatible with OpenStack Keystone is being used. If the
username, password and tenant details are known, only step 3 is
required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
(1) The token will be valid for some period of time. This could be an hour or a day, depending on the setup. This period of time should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed.
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
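Mirroring the guide's store_text example above, a minimal Python sketch of the PUT using requests; the URL layout and token header are taken from that snippet, and the 401 handling is just a placeholder for whatever refresh logic you add:

import requests

def store_object(token, auth, container_name, object_name, data):
    # "Version 2": send the raw object bytes as the request body.
    url = auth + "/" + container_name + "/" + object_name
    headers = {"X-Auth-Token": token}
    resp = requests.put(url, headers=headers, data=data)
    if resp.status_code == 401:
        # Token expired: re-authenticate against Keystone and retry.
        raise RuntimeError("token expired, refresh it and retry")
    resp.raise_for_status()
    return resp.status_code  # 201 Created on success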
I solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge