Override DNS lookup using Mockito during a JUnit Java test

Is it possible to override a single DNS lookup in the Java JVM?
I am running a JUnit test in Java. The test makes a connection to an external server, host1. Instead, I want the JVM to contact localhost whenever it tries to reach host1.
The test succeeds if the /etc/hosts file contains:
127.0.0.1 host1
I am wondering if it can be done without modifying the hosts file.
We are using Java 8. The JUnit test uses the Mockito and PowerMock libraries. It will run on macOS Mojave and on CentOS (in a container in a Kubernetes pod on GCP/AWS).
Thanks for reading.

You may want to look at poisoning your user environment instead of the system /etc/hosts, by tricking the system into loading an alternative hosts file. See this for a Linux solution: https://unix.stackexchange.com/questions/57459/how-can-i-override-the-etc-hosts-file-at-user-level
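Alternatively, since the test already uses Mockito and PowerMock, the lookup can be stubbed inside the JVM itself by mocking the static InetAddress.getByName call. Below is a minimal sketch of the idea; MyClient is a hypothetical stand-in for whatever code in the test actually resolves host1, and because InetAddress is a system class, PowerMock requires preparing the calling class rather than InetAddress itself:

import static org.junit.Assert.assertEquals;

import java.net.InetAddress;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical class under test; stands in for whatever code resolves host1.
class MyClient {
    InetAddress resolve(String host) throws Exception {
        return InetAddress.getByName(host);
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(MyClient.class) // prepare the class that calls InetAddress, not InetAddress itself
public class DnsOverrideTest {

    @Test
    public void host1ResolvesToLoopback() throws Exception {
        // Build the replacement address before mocking, so this call is not intercepted.
        InetAddress loopback = InetAddress.getByAddress("host1", new byte[] {127, 0, 0, 1});

        // Redirect static lookups of "host1" made from prepared classes.
        PowerMockito.mockStatic(InetAddress.class);
        PowerMockito.when(InetAddress.getByName("host1")).thenReturn(loopback);

        assertEquals("127.0.0.1", new MyClient().resolve("host1").getHostAddress());
    }
}

One caveat: this only redirects lookups made from classes PowerMock has prepared, so connections opened by native code or by library classes outside @PrepareForTest will still use real DNS; in that case, the user-level hosts-file trick above is the more robust option.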

Related

OpenShift OKD 4.5 on VMware

I am getting a connection timeout when running the command on the bootstrap node.
Any configuration suggestions on the networking side, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Consider deploying a separate VM into the network to check whether everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on the machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.

How to start the tutorials-IoT.Sensors services on a Linux instance in FIWARE Lab

I recently deployed an Ubuntu 16.04 instance on FIWARE Lab and accessed it using PuTTY. I downloaded Docker and docker-compose, and I successfully installed fiware-orion and mongo-db by following the tutorial. I then tried to follow the IoT Sensors tutorial, but whenever I try to start the services, they get stuck in an infinite loop: Context Broker HTTP state : 000 (waiting for 200).
Any suggestions?
Details:
region: Crete
image: Ubuntu 16.04
(screenshot: PuTTY session showing the infinite loop)
The problem was that the docker-compose file did not include the Orion (and MongoDB) instances, which are required dependencies for this tutorial. We have updated the corresponding docker-compose file to include both dependencies, and it now works properly. Tips: do not forget to open the corresponding port (3000) in the security group, and assign a floating IP to the virtual machine in order to access /device/monitor (do not use localhost for accessing it).

How to configure a new host and virtual machine on OpenNebula?

We're using OpenNebula to simulate a simple replicated JBoss application.
We've installed all the OpenNebula packages, plus QEMU, KVM, and libvirt.
We have created a simple ad hoc Ethernet network between my PC (a node) and my friend's PC (which is both a node and the front-end) by connecting the two machines directly with an Ethernet cable (10.0.0.1 and 10.0.0.2).
We can ping each other correctly, and we've set everything up so that we can SSH to each other without a password as the "oneadmin" user.
We've configured all files such as below:
/etc/libvirt/libvirtd.conf
/etc/default/libvirtd-bin
And so on...
kvm and kvm-intel are both enabled.
The daemon
libvirtd -d -l
seems to start correctly.
In fact, from the gui of opennebula in the front end, we can see both the hosts monitored.
However, there's a problem when we try to start the virtual machine on the node that is not the front-end, i.e. when we try to deploy a VM on the other node. The error is something like this:
cannot stat `/var/lib/one/datastores/1/f5394317d377beaa09fc07697df9ff68
but if, from the front-end (which has virtual machine no. 1), we run
cd /var/lib/one/datastores/1
then we can see that file; we've also given it all the permissions...
Any idea? :(
This may be related to the datastore configuration. If you left the default values, OpenNebula expects a shared filesystem (e.g. NFS) between the front-end and the virtualization nodes.
More context on the error (which I believe can be found in /var/lib/one/oned.log) would help in analysing this problem.

JMeter - trouble configuring payload for jmeter-server test connecting over SSH

I'm tearing my hair out over a JMeter config issue. I'm running JMeter on a dedicated injection server, using the GUI on my local box to control the tests [EDIT: the connection is SSH; the client is Windows 7 and the server is Linux]. I've run the tests from my local box and confirmed that they work correctly from there. I put the payload (text files containing one JSON object each) onto the injection server and changed the Publisher configuration in the message source section so that the path pointed to the files there, and... nothing.
This is the only output I get:
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: running clientengine run method
2012/09/24 14:26:50 INFO - jmeter.samplers.StandardSampleSender: Using StandardSampleSender for this test run
2012/09/24 14:26:50 INFO - jmeter.samplers.StandardSampleSender: Using StandardSampleSender for this test run
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: sent test to <IP_ADDRESS_OBSCURED> basedir='.'
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: Sending properties {}
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: sent run command to <IP_ADDRESS_OBSCURED>
I don't know what I'm doing wrong. I tried Apache's highly comprehensive documentation, but surprisingly there's nothing at all about this. How should I be configuring the path to the payload on the server?
Coincidentally, I solved this one today and was on my way home to post the answer. The important thing to note is that the tests weren't running at all: the server reported the start/stop messages, but the tests never actually ran. This is why:
I was using a JMS Producer sampler and connecting over SSH. This was part of the problem. In order to connect to a remote SSH server, it's necessary first to create an SSH tunnel, then start the JMeter server and client with special parameters. The process is described in this helpful and concise blog post:
http://blog.ionelmc.ro/2012/02/16/how-to-run-jmeter-over-ssh-tunnel/
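For illustration only, here is roughly what such a tunnel looks like if you build it programmatically in Java with the JSch library. The user name, host name, credentials, and the choice of port 1099 (JMeter's default RMI registry port) are assumptions; the blog post above achieves the same thing with the ssh command line and also covers the extra JMeter RMI properties you may need:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Sketch: forward local port 1099 to port 1099 on the remote injection
// server over SSH, so the JMeter client can reach the remote RMI registry
// via localhost.
public class JMeterSshTunnel {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        // "user" and "injection-server" are placeholders for your own values.
        Session session = jsch.getSession("user", "injection-server", 22);
        session.setPassword("secret"); // or jsch.addIdentity("/path/to/id_rsa")
        session.setConfig("StrictHostKeyChecking", "no"); // demo only; verify host keys in real use
        session.connect();

        // Equivalent of: ssh -L 1099:localhost:1099 user@injection-server
        session.setPortForwardingL(1099, "localhost", 1099);

        System.out.println("Tunnel up; point the JMeter client at localhost:1099");
        Thread.currentThread().join(); // keep the JVM (and tunnel) alive
    }
}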
The second mistake I was making was running the server on a Linux box (CentOS) and the client on a Windows 7 desktop. This isn't recommended, but I hadn't realised it would stop the tests from running. I dropped a Linux VM onto my Windows box, ran the tests from there, and everything worked perfectly.

What is the difference between using Glassfish Server -> Local and Remote

I am using IntelliJ IDEA to develop my applications, and I use GlassFish to run them.
When I want to run/debug my application, I can configure it from Glassfish Server -> Local and define arguments there. However, instead of the local Glassfish Server section there is also a Remote section for configuration, and I can easily configure and debug my application just by defining the host and port variables.
So my question is: why is the Glassfish Server Local configuration needed (apart from defining extra parameters), and what is the difference between them (in terms of performance, etc.)?
There are a number of development workflow optimizations and automations that an IDE can perform when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they have implemented:
1. Using in-place / exploded / directory deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver.
2. Linked to 1 is smarter redeployment: in some cases, a file change (like editing a JSP or an HTML file) does not need to trigger a redeployment.
3. JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deploying an app.
4. Access to server log files during deployment and execution.
5. The ability to start and stop the server... even today, you sometimes do need to restart GlassFish.
6. Viewing the generated Java sources of a JSP.
Most of these features are not available with a remote server, and that has a negative effect on iterative development, since the gap between editing and validating can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.
Local starts GlassFish for you and performs the deployment. With Remote, you start GlassFish manually. Remote can be used to debug apps running on other machines, while Local is useful for development and testing.