Can't access deployed contract with web3 after restarting local server - ethereum

I made an app that interacts with a smart contract on a local ganache-cli server. Everything was working fine, but after restarting the server (and deploying the contracts again) the app no longer seems to find the contracts. This is the error that I receive:
Error: UserController has not been deployed to detected network (network/artifact mismatch)
I've deployed the contracts and restarted the server multiple times but nothing seems to work; also, MetaMask is able to interact with the server.

Try redeploying with the --reset --all flags, which tends to resolve this issue.
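If redeploying alone doesn't help, it is worth checking whether the network ID recorded in the contract artifact matches the chain ganache-cli is now serving, since a freshly started ganache-cli instance usually reports a new network ID. A minimal sketch assuming a Truffle-style build artifact and web3.js 1.x (the artifact path and the default port 8545 are assumptions):

    // Sanity check: does the artifact know about the network web3 is connected to?
    // Assumes a Truffle-style build artifact and web3.js 1.x; adjust paths as needed.
    const Web3 = require('web3');
    const artifact = require('./build/contracts/UserController.json'); // assumed path

    const web3 = new Web3('http://127.0.0.1:8545'); // default ganache-cli endpoint

    async function checkDeployment() {
      const networkId = await web3.eth.net.getId();
      const deployment = artifact.networks[networkId];

      if (!deployment) {
        console.log(`No deployment recorded for network ${networkId}.`);
        console.log('Known networks in artifact:', Object.keys(artifact.networks));
        console.log('Re-run your migrations against the freshly started chain.');
        return;
      }

      // Instantiate the contract at the recorded address.
      const instance = new web3.eth.Contract(artifact.abi, deployment.address);
      console.log('UserController found at', deployment.address);
    }

    checkDeployment().catch(console.error);

If the artifact only lists the network ID of the previous ganache-cli run, that explains the "network/artifact mismatch" message.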

Related

OpenShift OKD 4.5 on VMware

I am getting a connection timeout when running the bootstrap command.
Are there any networking configuration suggestions in case I am missing something?
It says the Kubernetes API call times out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
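If you want to script the DNS part of the prerequisite check mentioned above, a minimal Node.js sketch along these lines can help. The cluster name and base domain are placeholders, and the api / api-int / *.apps records are the ones an OKD 4.x install typically expects:

    // Quick DNS sanity check for the records an OKD 4.x install typically expects.
    // The cluster name and base domain below are placeholders; substitute your own.
    const dns = require('dns').promises;

    const clusterName = 'okd';           // placeholder
    const baseDomain = 'example.com';    // placeholder

    const hosts = [
      `api.${clusterName}.${baseDomain}`,      // Kubernetes API (what the bootstrap waits on)
      `api-int.${clusterName}.${baseDomain}`,  // internal API endpoint
      `test.apps.${clusterName}.${baseDomain}` // should resolve via the *.apps wildcard record
    ];

    (async () => {
      for (const host of hosts) {
        try {
          const addresses = await dns.resolve4(host);
          console.log(`${host} -> ${addresses.join(', ')}`);
        } catch (err) {
          console.log(`${host} -> FAILED (${err.code})`);
        }
      }
    })();

Run it from a machine inside the same network as the cluster nodes; a failure on the api record lines up with the bootstrap timing out while waiting for the Kubernetes API.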

Fiware IDM server issue

I am using FIWARE IDM version 6.2 and I have issues with the Keystone server (running on port 5000).
Keystone works fine until the server goes unused for some amount of time (around 1 hour). After that, the first call that arrives (in my case from the PEP proxy checking an auth token) simply becomes unresponsive, meaning it does not send anything back. When I cancel the request and send it again, it starts working normally.
I would like to know if there is something on my part that I missed or failed to check.
I am using Docker to run the FIWARE IDM environment.
Picture of logs
You are using old versions of both Keyrock and Wilma; currently both are at version 7.5.1. Please take a look at Docker Hub (https://hub.docker.com/u/fiware). Nevertheless, the issue you mention is due to security management: admin tokens expire after 1 hour, so you need to obtain a new one to continue working with it.
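If you upgrade to Keyrock 7.x as suggested, a fresh admin token can be requested from its /v1/auth/tokens endpoint whenever the old one expires. A minimal Node.js sketch, where the host, port, and credentials are placeholders:

    // Request a fresh admin token from Keyrock 7.x.
    // Host, port and credentials are placeholders; replace with your own values.
    const http = require('http');

    const body = JSON.stringify({
      name: 'admin@test.com',   // placeholder admin user
      password: '1234'          // placeholder password
    });

    const req = http.request(
      {
        host: 'localhost',      // placeholder Keyrock host
        port: 3000,             // Keyrock's default HTTP port
        path: '/v1/auth/tokens',
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body)
        }
      },
      (res) => {
        // Keyrock returns the token in the X-Subject-Token response header.
        console.log('New admin token:', res.headers['x-subject-token']);
        res.resume();
      }
    );

    req.on('error', console.error);
    req.write(body);
    req.end();

Refreshing the token before it expires (or on the first 401) avoids the long hang you are seeing on the first request after an idle hour.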

Node.js server becomes unresponsive after a certain time period

I've recently been having problems with my server, which becomes unresponsive after a certain period of time.
Basically, after a certain amount of usage and time, my Node.js app stops responding to requests. I don't even see routes being fired on my console, and the HTTP calls from my client (an Android app) no longer reach the server. But after restarting my Node.js app server everything starts working again, until things inevitably stop again. The app never crashes; it just stops responding to requests.
I'm not getting any errors, and I've made sure to handle and log all DB connection errors so I'm not sure where to start.
Any clue as to what might be happening and how I can solve this problem?
Here's my stack:
Node.js on a DigitalOcean server with Ubuntu 14.04 and Nginx (using Express 4.15.2 + PM2 2.4.6)
Database running MySQL (using node-mysql)
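One common cause of this pattern with node-mysql is a single long-lived connection that dies silently (for example after MySQL's wait_timeout), leaving every later query waiting forever without any error surfacing. A minimal sketch of using a connection pool instead, where the credentials and route are placeholders:

    // Use a connection pool so a dead connection doesn't hang every later request.
    // Credentials, database name and route are placeholders.
    const express = require('express');
    const mysql = require('mysql'); // the node-mysql package

    const pool = mysql.createPool({
      connectionLimit: 10,
      host: 'localhost',
      user: 'app',
      password: 'secret',
      database: 'app_db'
    });

    const app = express();

    app.get('/users/:id', (req, res) => {
      // pool.query acquires a connection, runs the query and releases it,
      // so a stale connection gets replaced instead of blocking forever.
      pool.query('SELECT * FROM users WHERE id = ?', [req.params.id], (err, rows) => {
        if (err) {
          console.error('DB error:', err); // log instead of silently hanging
          return res.status(500).json({ error: 'database error' });
        }
        res.json(rows);
      });
    });

    app.listen(3000, () => console.log('listening on 3000'));

If you are already using a pool, adding per-request logging and a query timeout can at least reveal where requests get stuck.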

Intermittent Errors connecting to MySQL server

My app seems to run fine sometimes, and at other times it says it cannot connect to any of the MySQL servers.
I started out with the MySQL server being hosted in Azure as well, but I moved it externally due to connectivity issues.
I finally moved the ASP.NET app to a real VM instead of hosting it as a website. I tested a manual connection to the MySQL server when it became unresponsive and it failed as well. A traceroute to the server also failed.
Is this a known issue? Is this a duplicate of: Classic ASP site on Azure web site, remote mysql database

Send Mail Task failure with error due to SMTP connection

I've created an SSIS package which runs perfectly when scheduled as a job. Now I have a requirement that a mail be sent every time it runs, stating whether the package completed successfully or failed.
I've created an SMTP connection with the server name mx.xxxxxxxx (organization). I have checked neither the Windows Authentication nor the Enable Secure Socket Layer options (as suggested in various blogs).
The package runs fine and sends mail when run manually, but fails when scheduled as a job.
I've tried running it after editing the command line, as suggested by many, but with no success.
Can you please suggest where I might be going wrong?
Below is the error:
Argument "SMTP" for option "connection" is not valid. The command line parameters are invalid. The step failed.
Since it fails when you run it from your production server, regardless of whether you run it manually or from the job, it is probably related either to connectivity (can your production server reach the SMTP server?) or to permissions, if you are using some kind of proxy account on the server that is different from the one you use in your local BIDS.