I am using the docker-java API in my project to call the Docker Engine API. I couldn't find a suitable method that reports a container's CPU and memory usage, i.e. the equivalent of
GET /v1.24/containers/redis1/stats HTTP/1.1
through the docker-java API.
Dependency
compile group: 'com.github.docker-java', name: 'docker-java', version: '3.1.2'
Code
public static void execute() {
    DockerClient dockerClient = DockerClientBuilder.getInstance().build();
    dockerClient.statsCmd("containerName");
}
I didn't get any output. How do I execute docker stats with the docker-java API?
This works for me. Note that statsCmd(...) only builds the command; nothing is sent to the daemon until you exec(...) it with a result callback, which is why the snippet in the question produces no output.
// AsyncResultCallback here is com.github.dockerjava.core.InvocationBuilder.AsyncResultCallback (docker-java 3.x)
public Statistics getNextStatistics() throws ProfilingException {
    AsyncResultCallback<Statistics> callback = new AsyncResultCallback<>();
    client.statsCmd(containerId).exec(callback);
    Statistics stats = null; // initialize so the method compiles if awaitResult() throws
    try {
        stats = callback.awaitResult(); // blocks until the first stats sample arrives
        callback.close();
    } catch (RuntimeException | IOException e) {
        // you may want to throw an exception here
    }
    return stats; // this may be null or invalid if the container has terminated
}
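For the original question (CPU and memory usage), the Statistics object returned above carries the same data as GET /containers/{id}/stats. A minimal usage sketch; the getter names are the docker-java 3.x model getters as far as I know, so double-check them against your version:
// Sketch only: getter names assume the docker-java 3.x Statistics model.
Statistics stats = getNextStatistics();
if (stats != null) {
    // Memory usage and limit in bytes, as reported by the stats endpoint
    Long memoryUsage = stats.getMemoryStats().getUsage();
    Long memoryLimit = stats.getMemoryStats().getLimit();
    // Cumulative CPU time consumed by the container, in nanoseconds
    Long totalCpuUsage = stats.getCpuStats().getCpuUsage().getTotalUsage();
    System.out.println("memory: " + memoryUsage + " / " + memoryLimit
            + ", total cpu: " + totalCpuUsage);
}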
DockerClient is where we establish a connection between the Docker engine/daemon and our application.
By default, the Docker daemon is only accessible via the unix:///var/run/docker.sock socket. Unless configured otherwise, we communicate locally with the Docker engine listening on that Unix socket.
We can open a connection in two steps:
DefaultDockerClientConfig.Builder config
= DefaultDockerClientConfig.createDefaultConfigBuilder();
DockerClient dockerClient = DockerClientBuilder
.getInstance(config)
.build();
Since the engine may be exposed differently, the client can also be configured for other setups.
For example, the builder accepts a server URL, so we can point the client at an engine listening on port 2375:
DockerClient dockerClient
= DockerClientBuilder.getInstance("tcp://docker.baeldung.com:2375").build();
Note that we need to prepend the connection string with unix:// or tcp:// depending on the connection type.
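Putting the two pieces together, here is a minimal sketch (assuming docker-java 3.1.2, a daemon exposed on tcp://localhost:2375, and a container named "redis1"; adjust the host and container name to your setup) that builds the client from the config builder and runs the stats command the same way as in the answer above:
// Sketch only: host, port and container name are placeholders.
DefaultDockerClientConfig config = DefaultDockerClientConfig.createDefaultConfigBuilder()
        .withDockerHost("tcp://localhost:2375") // or "unix:///var/run/docker.sock"
        .build();
DockerClient dockerClient = DockerClientBuilder.getInstance(config).build();

AsyncResultCallback<Statistics> callback = new AsyncResultCallback<>();
dockerClient.statsCmd("redis1").exec(callback); // statsCmd alone sends nothing; exec() starts the request
Statistics stats = callback.awaitResult();
callback.close();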
Related
I'm trying to implement a TCP connection. Everything works fine from the server's side, but when I run the client program (from the client computer) I get the following error:
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:432)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:189)
at TCPClient.main(TCPClient.java:13)
I tried changing the port number in case it was in use, but to no avail. Does anyone know what is causing this error and how to fix it?
The Server Code:
//TCPServer.java
import java.io.*;
import java.net.*;
class TCPServer {
    public static void main(String argv[]) throws Exception {
        String fromclient;
        String toclient;
        ServerSocket Server = new ServerSocket(5000);
        System.out.println("TCPServer Waiting for client on port 5000");
        while (true) {
            Socket connected = Server.accept();
            System.out.println(" THE CLIENT" + " " + connected.getInetAddress()
                    + ":" + connected.getPort() + " IS CONNECTED ");
            BufferedReader inFromUser = new BufferedReader(
                    new InputStreamReader(System.in));
            BufferedReader inFromClient = new BufferedReader(
                    new InputStreamReader(connected.getInputStream()));
            PrintWriter outToClient = new PrintWriter(
                    connected.getOutputStream(), true);
            while (true) {
                System.out.println("SEND(Type Q or q to Quit):");
                toclient = inFromUser.readLine();
                if (toclient.equals("q") || toclient.equals("Q")) {
                    outToClient.println(toclient);
                    connected.close();
                    break;
                } else {
                    outToClient.println(toclient);
                }
                fromclient = inFromClient.readLine();
                if (fromclient.equals("q") || fromclient.equals("Q")) {
                    connected.close();
                    break;
                } else {
                    System.out.println("RECEIVED:" + fromclient);
                }
            }
        }
    }
}
The Client Code:
//TCPClient.java
import java.io.*;
import java.net.*;
class TCPClient {
    public static void main(String argv[]) throws Exception {
        String FromServer;
        String ToServer;
        Socket clientSocket = new Socket("localhost", 5000);
        BufferedReader inFromUser = new BufferedReader(new InputStreamReader(
                System.in));
        PrintWriter outToServer = new PrintWriter(
                clientSocket.getOutputStream(), true);
        BufferedReader inFromServer = new BufferedReader(new InputStreamReader(
                clientSocket.getInputStream()));
        while (true) {
            FromServer = inFromServer.readLine();
            if (FromServer.equals("q") || FromServer.equals("Q")) {
                clientSocket.close();
                break;
            } else {
                System.out.println("RECEIVED:" + FromServer);
                System.out.println("SEND(Type Q or q to Quit):");
                ToServer = inFromUser.readLine();
                if (ToServer.equals("Q") || ToServer.equals("q")) {
                    outToServer.println(ToServer);
                    clientSocket.close();
                    break;
                } else {
                    outToServer.println(ToServer);
                }
            }
        }
    }
}
This exception means that there is no service listening on the IP/port you are trying to connect to:
You are trying to connect to the wrong IP/Host or port.
You have not started your server.
Your server is not listening for connections.
On Windows servers, the listen backlog queue is full.
I would check:
The host name and port you're trying to connect to
That the server side has managed to start listening correctly
That there's no firewall blocking the connection
The simplest starting point is probably to try to connect manually from the client machine using telnet or Putty. If that succeeds, then the problem is in your client code. If it doesn't, you need to work out why it hasn't. Wireshark may help you on this front.
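If telnet or Putty isn't available on the client machine, a small Java check does the same job; this is just a sketch using the standard library, and the host and port are placeholders for your server:
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReachabilityCheck {
    public static void main(String[] args) {
        String host = "192.168.1.10"; // placeholder: the server's IP
        int port = 5000;              // placeholder: the server's port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("Something is listening on " + host + ":" + port);
        } catch (ConnectException e) {
            System.out.println("Connection refused: nothing is listening on that IP/port.");
        } catch (SocketTimeoutException e) {
            System.out.println("Timed out: host unreachable or a firewall is dropping packets.");
        } catch (IOException e) {
            System.out.println("Other I/O error: " + e.getMessage());
        }
    }
}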
You have to connect your client socket to the remote ServerSocket. Instead of
Socket clientSocket = new Socket("localhost", 5000);
do
Socket clientSocket = new Socket(serverName, 5000);
The client must connect to serverName which should match the name or IP of the box on which your ServerSocket was instantiated (the name must be reachable from the client machine). BTW: It's not the name that is important, it's all about IP addresses...
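For example (a sketch; the address is a placeholder for the machine running TCPServer), the only change needed on the client side is:
// Placeholder: use the hostname or IP of the machine running TCPServer.
String serverName = "192.168.1.10";
Socket clientSocket = new Socket(serverName, 5000); // the name is resolved to an IP before connecting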
I had the same problem, but running the Server before running the Client fixed it.
One point that I would like to add to the answers above is my experience:
"I hosted my server on localhost and was trying to connect to it through an Android emulator by specifying a proper URL like http://localhost/my_api/login.php, and I was getting a connection refused error."
Point to note: when I went to the browser on the PC and used the same URL (http://localhost/my_api/login.php), I got the correct response.
So the problem in my case was the term localhost, which I replaced with the IP of my server (since the server is hosted on your own machine), which made it reachable from my emulator on the same PC.
To get the IP of your local machine, you can use the ipconfig command in cmd; you will get an IPv4 address, something like 192.168.xx.yy.
Voila, that's your machine's IP where you have your server hosted. Use it instead of localhost:
http://192.168.72.66/my_api/login.php
Note: you won't be able to reach this private IP from any node outside this computer. (In case you need that, you can use Nginx for it.)
I had the same problem with the MQTT broker called VerneMQ, but solved it by adding the following:
$ sudo vmq-admin listener show
to show the list of allowed IPs and ports for VerneMQ, and
$ sudo vmq-admin listener start port=1885 -a 0.0.0.0 --mountpoint /appname --nr_of_acceptors=10 --max_connections=20000
to listen on any IP and your new port. Now you should be able to connect without any problem.
Hope it solves your problem.
Hope my experience may be useful to someone. I faced the problem with the same exception stack trace and I couldn't understand what the issue was. The database server which I was trying to connect to was running, the port was open, and it was accepting connections.
The issue was with the internet connection: the connection I was using was not allowed to connect to the corresponding server. When I changed the connection details, the issue was resolved.
In my case, I gave the socket the name of the server (in my case "raspberrypi"), and only an explicit IPv4 address worked; to be specific, IPv6 was broken (the name resolved to an IPv6 address).
In my case, I had to check "Expose daemon on tcp://localhost:2375 without TLS" in the Docker settings (on the right side of the taskbar, right-click the Docker icon and select Settings).
I got this error because I closed the ServerSocket inside a for loop that was trying to accept a number of clients (I had not finished accepting all clients), so be careful where you close your socket; see the sketch below.
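A minimal sketch of the pattern (the client count is illustrative): close each accepted client Socket when you are done with it, but close the ServerSocket only after the accept loop has finished.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class AcceptLoop {
    public static void main(String[] args) throws IOException {
        int numberOfClients = 3; // illustrative
        ServerSocket serverSocket = new ServerSocket(5000);
        for (int i = 0; i < numberOfClients; i++) {
            Socket client = serverSocket.accept();
            // ... talk to the client ...
            client.close();          // close the per-client socket here
            // serverSocket.close(); // WRONG here: later accept() calls would fail
        }
        serverSocket.close();        // close the listening socket only after the loop
    }
}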
I had the same problem, and it was that I was not closing the socket object. After using socket.close(), the problem was solved.
This code works for me.
ClientDemo.java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.net.UnknownHostException;

public class ClientDemo {
    public static void main(String[] args) throws UnknownHostException,
            IOException {
        Socket socket = new Socket("127.0.0.1", 55286);
        OutputStreamWriter os = new OutputStreamWriter(socket.getOutputStream());
        os.write("Santosh Karna");
        os.flush();
        socket.close();
    }
}
and
ServerDemo.java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerDemo {
    public static void main(String[] args) throws IOException {
        System.out.println("server is started");
        ServerSocket serverSocket = new ServerSocket(55286);
        System.out.println("server is waiting");
        Socket socket = serverSocket.accept();
        System.out.println("Client connected");
        BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        String str = reader.readLine();
        System.out.println("Client data: " + str);
        socket.close();
        serverSocket.close();
    }
}
I changed my DNS settings and it fixed the problem.
You probably didn't start the server, or the client is trying to connect to the wrong IP/port.
Change localhost to your IP address; for example, replace
localhost
with your local IP:
192.168.x.x
I saw the same error message "java.net.ConnectException: Connection refused" in SQuirreL SQL when it was trying to connect to a PostgreSQL database through an SSH tunnel.
It was trying to connect to the wrong port. After entering the correct port, the connection succeeded.
See more options to fix this error at: https://stackoverflow.com/a/6876306/5857023
In my case, with the server written in C# and the client written in Java, I resolved it by specifying the hostname as 'localhost' in the server and '[::1]' in the client. I don't know why that is, but specifying 'localhost' in the client did not work.
Supposedly these are synonyms in many ways, but apparently not a 100% match. Hope it helps someone avoid a headache.
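A likely explanation is that 'localhost' resolves to both an IPv4 (127.0.0.1) and an IPv6 (::1) loopback address, and the server was only listening on one of them. A quick sketch (standard library only) to see what your JVM resolves 'localhost' to:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalhostAddresses {
    public static void main(String[] args) throws UnknownHostException {
        // Prints every address "localhost" maps to, e.g. /127.0.0.1 and /0:0:0:0:0:0:0:1
        for (InetAddress address : InetAddress.getAllByName("localhost")) {
            System.out.println(address);
        }
    }
}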
For those who are experiencing the same problem and use the Spring framework, I would suggest checking the HTTP client configuration, i.e. RestTemplate, WebClient, etc.
In my case there was a problem with a configured RestTemplate (it's just an example):
public RestTemplate localRestTemplate() {
    Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("localhost", <some port>));
    SimpleClientHttpRequestFactory clientHttpReq = new SimpleClientHttpRequestFactory();
    clientHttpReq.setProxy(proxy);
    return new RestTemplate(clientHttpReq);
}
I just simplified the configuration to:
public RestTemplate restTemplate() {
    return new RestTemplate(new SimpleClientHttpRequestFactory());
}
And it started to work properly. The refused connection was coming from the proxy setting: every request was being routed through a proxy on localhost that was not actually running.
There is a service called MySQL80 that must be running for you to connect to the database.
On Windows, you can check it by searching for Services, then looking for the MySQL80 service and making sure it is running.
It could be that there is a previous instance of the client still running and listening on port 5000.
I'm trying to create an SSH tunnel into a Compute Engine instance from an environment that doesn't have gcloud installed (the App Engine Standard Node.js environment).
What are the steps needed to do that? How does the gcloud compute ssh command do it? Is there a Node.js library that already does it?
I created the package gcloud-ssh-tunnel that does the necessary steps:
Create a private/public key using sshpk
Import the public key using the OS Login API
SSH using ssh2 (and specifically create a tunnel, because this was the use case I needed - see the Why? section in the package)
Delete the public key using the OS Login API (so as not to clutter the account or leave lingering access)
You can use ssh2 to do that in nodejs.
"gcloud compute ssh" generates persistent SSH keys for the user. The public key is stored in project or instance SSH keys metadata, and the Guest Environment creates the necessary local user and places ~/.ssh/authorized_keys in its home directory.
You can manually add your public key to the instance, and then connect to it via SSH using a Node SSH library.
Or you can set a startup script for the instance when you are creating it.
As Cloud Ace pointed out, you can use the ssh2 module for Node.js compatibility.
In order to SSH into a GCP instance you have to:
Enable OS Login
Create a service account and assign it "Compute OS Admin Login" role.
Create SSH key and import it into the service account.
Use that SSH key and POSIX username.
The first 2 steps already link to the documentation.
Create SSH key:
import { generatePrivateKey } from 'sshpk';

const keyPair = generatePrivateKey('ecdsa');
const privateKey = keyPair.toString();
const publicKey = keyPair.toPublic().toString();
Import key:
import { OsLoginServiceClient } from '@google-cloud/os-login';

const osLoginServiceClient = new OsLoginServiceClient({
  credentials: googleCredentials,
});

const [result] = await osLoginServiceClient.importSshPublicKey({
  parent: osLoginServiceClient.userPath(googleCredentials.client_email),
  sshPublicKey: {
    expirationTimeUsec: ((Date.now() + 10 * 60 * 1_000) * 1_000).toString(),
    key: publicKey,
  },
});
SSH using the key:
import { NodeSSH } from 'node-ssh';

const ssh = new NodeSSH();

await ssh.connect({
  host,
  privateKey,
  username: loginProfile.posixAccounts[0].username,
});
In this example, I am using node-ssh but you can use anything.
The only other catch is that you need to figure out the public host. Implementation for that looks like this:
import { InstancesClient } from '@google-cloud/compute';

const findFirstPublicIp = async (
  googleCredentials: GoogleCredentials,
  googleZone: string,
  googleProjectId: string,
  instanceName: string,
) => {
  const instancesClient = new InstancesClient({
    credentials: googleCredentials,
  });

  const instances = await instancesClient.get({
    instance: instanceName,
    project: googleProjectId,
    zone: googleZone,
  });

  for (const instance of instances) {
    if (!instance || !('networkInterfaces' in instance) || !instance.networkInterfaces) {
      throw new Error('Unexpected result.');
    }

    for (const networkInterface of instance.networkInterfaces) {
      if (!networkInterface || !('accessConfigs' in networkInterface) || !networkInterface.accessConfigs) {
        throw new Error('Unexpected result.');
      }

      for (const accessConfig of networkInterface.accessConfigs) {
        if (accessConfig.natIP) {
          return accessConfig.natIP;
        }
      }
    }
  }

  throw new Error('Could not locate public instance IP address.');
};
Finally, to clean up, you have to call deleteSshPublicKey with the name of the key that you've imported:
import * as crypto from 'crypto';

const fingerprint = crypto
  .createHash('sha256')
  .update(publicKey)
  .digest('hex');

const sshPublicKey = loginProfile.sshPublicKeys?.[fingerprint];

if (!sshPublicKey) {
  throw new Error('Could not locate SSH public key with a matching fingerprint.');
}
await osLoginServiceClient.deleteSshPublicKey({
  name: sshPublicKey.name,
});
In general, you'd need to reserve and assign a static external IP address to begin with (unless you're trying to SSH from within the same network), and a firewall rule needs to be defined for port tcp/22, which can then be applied (via a network tag) to the instance that has that external IP assigned.
The other way around works with gcloud app instances ssh:
SSH into the VM of an App Engine Flexible instance
which might be less effort and cost to set up, because a GCP VM usually has gcloud installed.
I have a simple REST API built with Express, Knex and Bookshelf.
I'm doing some performance tests with JMeter, and I've noticed that if I call the API that performs the following query, there is no problem:
public static async fetchById(id: number): Promise<DatasetStats> {
  return DatasetStats.where<DatasetStats>({ id }).fetch();
}
DatasetStats is a Bookshelf model.
But if I set JMeter to call the following, I get Error: ER_CON_COUNT_ERROR: Too many connections after a minute:
import * as knex from 'knex';

@injectable()
export class MyRepo {
  private knex: knex;

  constructor() { this.knex = knex(DatabaseConfig); }

  async fetchResourcesList(datasetName: string): Promise<any> {
    return this.knex.distinct('resource').from(datasetName);
  }
}
Could the problem be that I create a knex object for each request?
Yes. If you create a new knex instance for each request, you cannot control the total number of concurrent connections to the MySQL database. You also won't be able to re-use already open connections from knex's connection pool, so it is highly inefficient to open a new TCP connection to the database on every query. And if you don't destroy your knex instances after the query, connections will be left open until some idle timeout, and the app will leak memory. Create the knex instance once when the application starts and share it, for example by injecting the same instance into MyRepo instead of calling knex(DatabaseConfig) in the constructor.
I'm creating a back-end server application in Dart which uses a MySQL database to store data. To make the SQL calls I'm using the ConnectionPool from SqlJocky.
What I do when the app starts:
Create a singleton which stores the ConnectionPool
Execute multiple queries with prepareExecute and query
Locally this approach is working fine. Now I pushed a development version to Heroku and I'm getting connection issues after a few minutes.
So I wonder: do I need to close/release a single connection from the pool that I use to execute a query? Or is the connection placed back in the pool after the query and free for use again?
The abstract base class for all the MySQL stores:
abstract class MySQLStore {
  MySQLStore(ConnectionPool connectionPool) {
    this._connectionPool = connectionPool;
  }

  ConnectionPool get connectionPool => this._connectionPool;

  ConnectionPool _connectionPool;
}
A concrete implementation for the method getAll:
Future<List<T>> getAll() async {
  Completer completer = new Completer();

  connectionPool.query("SELECT id, name, description FROM role").then((result) {
    return result.toList();
  }).then((rows) {
    completer.complete(this._processRows(rows));
  }).catchError((error) {
    // TODO: Better error handling.
    print(error);
    completer.complete(null);
  });

  return completer.future;
}
The error I get:
SocketException: OS Error: Connection timed out, errno = 110, address = ...
This doesn't fully answer your question, but I think you could simplify your code like this:
Future<List<T>> getAll() async {
  try {
    var result = await connectionPool.query(
        "SELECT id, name, description FROM role");
    return this._processRows(await result.toList());
  } catch (error) {
    // TODO: Better error handling.
    print(error);
    return null;
  }
}
I'm sure there is no need to close a connection after query. I don't know about prepareExecute, though.
According to a comment in the SqlJocky code, it can take quite some time for a connection to be released by the database server.
Maybe you need to increase the connection pool size (the default is 5) so you don't run out of connections while ConnectionPool is waiting for connections to be released.
After some feedback from Heroku, I managed to resolve this problem by implementing a timer task that performs a basic MySQL call every 50 seconds.
The response from Heroku:
Heroku's networking enforces an idle timeout of 60-90 seconds to prevent runaway processes. If you're using persistent connections in your application, make sure that you're sending a keep-alive at, say, 55 seconds to prevent your open connection from being dropped by the server.
The workaround code:
const duration = const Duration(seconds: 50);

new Timer.periodic(duration, (Timer t) {
  // Do a simple MySQL call on the connection pool to keep the connection alive.
  this.connectionPool.execute('SELECT id from role');
  print('*** Keep alive triggered for MySQL heroku ***');
});
I'm new to Couchbase. I copied the first code example from http://www.couchbase.com/communities/java/getting-started into my Eclipse project, but when I run it and check the server I can't find the document. Below is the console output:
2014-03-16 17:30:42.390 INFO com.couchbase.client.CouchbaseConnection: Added {QA sa=/127.0.0.1:11210, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2014-03-16 17:30:42.390 INFO com.couchbase.client.CouchbaseClient: CouchbaseConnectionFactory{, bucket='trust', nodes=[http://localhost:8091/pools], order=RANDOM, opTimeout=2500, opQueue=16384, opQueueBlockTime=10000, obsPollInt=10, obsPollMax=500, obsTimeout=5000, viewConns=10, viewTimeout=75000, viewWorkers=1, configCheck=10, reconnectInt=1100, failureMode=Redistribute, hashAlgo=NATIVE_HASH}
2014-03-16 17:30:42.390 INFO com.couchbase.client.CouchbaseConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl#f8ae79
2014-03-16 17:30:42.468 INFO com.couchbase.client.CouchbaseClient: viewmode property isn't defined. Setting viewmode to production mode
2014-03-16 17:30:42.703 INFO net.spy.memcached.auth.AuthThread: Authenticated to localhost/127.0.0.1:11210
And here is my Java class:
import java.net.URI;
import java.util.Arrays;
import java.util.List;

import com.couchbase.client.CouchbaseClient;

public class Test_ {
    public static void main(String[] args) throws Exception {
        // (Subset) of nodes in the cluster to establish a connection
        List<URI> hosts = Arrays.asList(new URI("http://localhost:8091/pools"));
        // Name of the Bucket to connect to
        String bucket = "trust";
        // Password of the bucket (empty string if none)
        String password = "HIDDEN";
        // Connect to the Cluster
        CouchbaseClient client = new CouchbaseClient(hosts, bucket, password);
        // Store a Document
        client.set("33", "test my java code").get();
        // Retrieve the Document and print it
        //System.out.println(client.get("33"));
        // Shutting down properly
        client.shutdown();
    }
}
Solved: it was just the firewall blocking port 11210.