I am setting up a private Ethereum testnet using geth.
This is the genesis file. My question is: when I create new accounts using the personal.newAccount() command, do I have to replace these addresses with the new ones and initialize the JSON file again?
I have already tried running this file; the mining starts, but the account balance does not go up.
{
  "config": {
    "chainId": 1994,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0
  },
  "difficulty": "400",
  "gasLimit": "2000000",
  "alloc": {
    "7b684d27167d208c66584ece7f09d8bc8f86ffff": {
      "balance": "100000000000000000000000"
    },
    "ae13d41d66af28380c7af6d825ab557eb271ffff": {
      "balance": "120000000000000000000000"
    }
  }
}
The mining thread gets killed and the connection from the geth JavaScript console times out.
Defining these addresses in the genesis file does not generate them as accounts; it only pre-funds them. I would recommend generating one or two accounts first and using those addresses as the pre-funded addresses.
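A minimal sketch of that flow (the data directory is a placeholder; copy the printed address, without the 0x prefix, into the alloc section of genesis.json before running init):
geth --datadir ./node1 account new
geth --datadir ./node1 init genesis.json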
Regarding this: "I have already tried running this file; the mining starts, but the account balance does not go up."
It doesn't increase because no mining address/account is specified.
In your case, the mining process is killed after miner.start() because no etherbase/coinbase account is specified. Normally the etherbase/coinbase is assigned to the first account. If you don't want to create an account, you have to assign the etherbase explicitly.
I assume that something like this is thrown:
Error: etherbase missing: etherbase address must be explicitly specified
You can set the address via this command:
miner.setEtherbase("7b684d27167d208c66584ece7f09d8bc8f86ffff")
All in all, you have two options:
1. Create an account and use it as your etherbase.
2. Assign the etherbase to your pre-funded address.
By creating an account, the etherbase will be assigned to it automatically.
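A quick sketch of the first option in the geth console (the passphrase is a placeholder):
personal.newAccount("passphrase")
eth.coinbase
miner.start(1)
eth.getBalance(eth.coinbase)
Once a few blocks have been mined, eth.getBalance(eth.coinbase) should start going up.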
I hope this helps.
I'm setting up an InnoDB Cluster using mysqlsh. This is in Kubernetes, but I think this question applies more generally.
When I use cluster.configureInstance() I see messages that include:
This instance reports its own address as node-2:3306
However, the nodes can only find each other through DNS at an address like node-2.cluster:3306. The problem comes when adding instances to the cluster; they try to find the other nodes without the qualified name. Errors are of the form:
[GCS] Error on opening a connection to peer node node-0:33061 when joining a group. My local port is: 33061.
It is using node-n:33061 rather than node-n.cluster:33061.
If it matters, the "DNS" is set up as a headless service in Kubernetes that provides consistent addresses as pods come and go. It's very simple, and I named it "cluster" to create addresses of the form node-n.cluster. I don't want to cloud this question with detail I don't think matters, however; surely other configurations require the instances in the cluster to use DNS as well.
I thought that setting localAddress when creating the cluster and adding the nodes would solve the problem. Indeed, after I added that to the createCluster options, I can look in the database and see
| group_replication_local_address | node-0.cluster:33061 |
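For reference, the call looked roughly like this (a sketch matching the names and ports above):
dba.createCluster('mycluster', {localAddress: 'node-0.cluster:33061'})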
After I create the cluster and look at the topology, it seems that the local address setting has no effect whatsoever:
{
  "clusterName": "mycluster",
  "defaultReplicaSet": {
    "name": "default",
    "primary": "node-0:3306",
    "ssl": "REQUIRED",
    "status": "OK_NO_TOLERANCE",
    "statusText": "Cluster is NOT tolerant to any failures.",
    "topology": {
      "node-0:3306": {
        "address": "node-0:3306",
        "memberRole": "PRIMARY",
        "mode": "R/W",
        "readReplicas": {},
        "replicationLag": null,
        "role": "HA",
        "status": "ONLINE",
        "version": "8.0.29"
      }
    },
    "topologyMode": "Single-Primary"
  },
  "groupInformationSourceMember": "node-0:3306"
}
And adding more instances continues to fail with the same communication errors.
How do I convince each instance that the address it needs to advertise is different? I will try other permutations of the localAddress setting, but it doesn't look like it's intended to fix the problem I'm having. How do I reconcile the address the instance reports for itself with the address that's actually useful for other instances to find it?
Edit to add: Maybe it is a Kubernetes thing? Or a Docker thing at any rate. There is an environment variable set in the container:
HOSTNAME=node-0
Does the containerized MySQL use that? If so, how do I override it?
Apparently this value has to be set at startup. For my setup, passing the option
--report-host=${HOSTNAME}.cluster
when starting the MySQL instances resolved the issue.
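For example, as part of the container's startup command (a sketch; it assumes the headless service is named "cluster", so ${HOSTNAME}.cluster is resolvable):
exec mysqld --report-host="${HOSTNAME}.cluster"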
Specifically for Kubernetes, an example is at https://github.com/adamelliotfields/kubernetes/blob/master/mysql/mysql.yaml
Recently I started learning/working with ARM templates and JSON, so I'm a complete newbie to this. I've been asked to make a template that creates a virtual machine, selecting an existing virtual network and subnet within a subscription.
Everything works fine, except that whenever I run the deployment, the template creates a new VNet and subnet with randomized names instead of letting me pick an existing one (the VM itself is created correctly, though).
I used the https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-rhel/azuredeploy.json quickstart template as a base and added a few lines (below) to let me type the name of my VNet and subnet, as it does with the VM name, but it keeps creating new ones even though I type the name correctly.
The lines I added to the code in the Parameters section are:
"virtualNetworkName": {
"type": "string",
"metadata": {
"description": "VNet to which the VM will connect."
}
},
"subnetName": {
"type": "string",
"metadata": {
"description": "Subnet to which the VM will connect."
}
}
Thank you in advance for your time!
To create a VM with an existing VNet based on the quickstart template you used, you only need to delete the virtual network resource from the resources block, remove the dependency on it, and remove all the variables about the VNet and subnet except the variable subnetRef. Then change that variable to use your parameters, like this if the VNet is in the same resource group as the VM:
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
If the existing VNet is in another resource group but in the same subscription, the variable subnetRef should be changed like this:
"subnetRef": "[resourceId('otherResourceGroup', 'Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
According to the changes, the template will use the existing VNet and subnet instead of creating new ones.
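For context, the network interface resource in the template then consumes that variable roughly like this (a sketch based on the quickstart template's NIC definition):
"ipConfigurations": [
  {
    "name": "ipconfig1",
    "properties": {
      "privateIPAllocationMethod": "Dynamic",
      "subnet": {
        "id": "[variables('subnetRef')]"
      }
    }
  }
]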
Take a look at this sample:
https://github.com/Azure/azure-quickstart-templates/tree/master/100-marketplace-sample
It shows how you can use a pattern for new/existing/none on resources in a template.
I have set up the Sensu server and client successfully and everything is working except one thing. In this image
you can see that there are alerts for the MySQL and web ports, but right now I have given only the "mysql" subscription in the client.json file on my client system. I removed the "webserver" subscription from client.json (which I had added initially, before replacing it with "mysql"), but the checks associated with the "webserver" subscription are still displayed. Why is this, and how do I display only the checks associated with the given subscription? Here is my client.json:
{
  "client": {
    "name": "sensuclient2",
    "address": "127.0.0.1",
    "keepalive": {
      "thresholds": {
        "warning": 60,
        "critical": 120
      },
      "handlers": ["default", "mailer", "sns"]
    },
    "subscriptions": [
      "mysql"
    ]
  }
}
It's possible Uchiwa is showing older events from before the change you made to your client configuration file (at least I went through that once!). Try deleting the events. If the API is not running the checks anymore, the events won't come up again.
You can either use sensu-cli to delete the events:
sensu-cli event delete sensuclient2 check_http
https://github.com/agent462/sensu-cli
Or make an API call...
curl -s -i -X DELETE http://yourhost:yourport/events/sensuclient2/check_http
https://sensuapp.org/docs/1.1/api/events-api.html#eventsclientcheck-delete
If the checks do come back, you should review the checks and client configuration on both the server and the client side.
Also, the simplest solution is the best, as @vishal.k himself reminded me:
you can always delete the events using Uchiwa's interface. :)
TL;DR
How do I migrate my ACL setup to the new 0.8-style (node/service) ACL permissions while still running 0.7.3?
Current setup
I am currently running an ACL-enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes/services are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new ACLs "pre 0.8" using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled, and I have configured an acl_master_token:
"acl_master_token": "<token>"
and a token for the agents:
"acl_token": "<token>"
which all agents use / are configured with.
I have these ACL defaults:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny"
}
and my Consul config looks like this:
{
  "datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "dwconsul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true,
  "acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, nor can the nodes/agents register with the master.
Question
What exactly is needed to get the following:
All agents can see all nodes, all services, and all KVs.
Anonymous sees nothing: no KV, no services, no nodes (that's what becomes possible with 0.8).
I looked at the "ACL Changes Coming in Consul 0.8" section of https://www.consul.io/docs/internals/acl.html but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people are interested in this, so you would be helping all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
On your Consul servers configure them with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
On your Consul agents, configure a similar token to keep things open, or you can give them a token with a more specific policy that only lets them write to some allowed node-name prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful with what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
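For example, such a token could be created through the legacy ACL HTTP API using your master token (a sketch; host, port, and tokens are placeholders):
curl -s -X PUT http://127.0.0.1:8500/v1/acl/create \
  -H "X-Consul-Token: <acl_master_token>" \
  -d '{"Name": "agent-read", "Type": "client", "Rules": "node \"\" { policy = \"read\" } service \"\" { policy = \"read\" } key \"\" { policy = \"read\" }"}'
The ID in the response is the token you would then configure on the agents.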
I have more than enough CPUs and memory to launch 100 instances, but only 30 external IP addresses. Is there a way I can launch more instances despite that?
Chances are you don't need that many IPs at all. Only in very specific scenarios would you need all of your nodes to be publicly accessible.
If you need that many instances, simply create them without public IPs. Then create a NAT gateway so your instances can use it to reach addresses outside your private network, as sketched below.
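One way to set that up today is Cloud NAT (a sketch; router name, NAT name, network, and region are placeholders):
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges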
You will be able to accomplish 99% of usage scenarios this way. If you really need more IPs and you have used all of your ephemeral IPs, you can request more via the quota increase form.
I guess it depends on what you want to do, but the gcloud compute instances create tool has a flag, --no-address, which lets you launch an instance with no external IP address. Have a look at gcloud compute instances create --help to see if you think that would be useful.
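For example (instance name and zone are placeholders):
gcloud compute instances create instance-1 --zone=us-central1-a --no-address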
If you wanted to use the API or instance templates, I think just leaving out the accessConfigs part of the networking section of the request body will do what you need. Compare this:
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/your-project-here/global/networks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT"
}
]
}
Where I used the default option of "Ephemeral" for the external IP in the Google Cloud Developers console, with this:
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/your-project-here/global/networks/default"
}
]
Where I selected "None" as the External IP.
To see what the API body would look like, there is a "View Equivalent REST" link just below the Create button in the console; it can be really useful for templates and things.