TensorBoard 2.2.1 on Windows 10 - No Dashboards Active for the Current Data Set

I have run some Keras code and generated log files for TensorBoard. The TensorBoard extension is not loading in Jupyter Notebook, so I tried to launch TensorBoard from the Command Prompt, but I always get the message that no dashboards are active for the current data set.
Here are the files that were generated:
I ran the command:
Error:
I tried a couple of fixes found on Google, but nothing worked.
EDIT1:
I installed protobuf using pip and was able to load TensorBoard after I restarted my machine. However, I now have another issue.
I defined the log dir:
%load_ext tensorboard
This creates the right log files, but when I start the session in Jupyter, TensorBoard does not load the first time. If I run it again, it loads but says no dashboards are available. If I restart my machine and launch TensorBoard from the Command Prompt again, it works for the same log directory.
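For reference, a minimal setup that produces this kind of log layout looks roughly like the following; the model, data, and log path are illustrative, not the exact code from the notebook:

import datetime
import os

import numpy as np
import tensorflow as tf

# Illustrative log directory: one subfolder per run so runs can be compared.
log_dir = os.path.join("logs", "fit", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))

# Tiny model and dummy data, just enough to produce TensorBoard event files.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(np.random.rand(32, 4), np.random.rand(32, 1),
          epochs=2, callbacks=[tensorboard_cb], verbose=0)

In the notebook the extension is then loaded and pointed at the parent of the run folders, and the same directory can be used from the Command Prompt:

%load_ext tensorboard
%tensorboard --logdir logs/fit

tensorboard --logdir logs/fit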


Packer - vSphere-iso - Floppy upload to datastore results in broken pipe or 404 error

I'm struggling to upload a floppy image (the same goes for a CD-ROM image) from Packer using the vsphere-iso plugin.
I was able to deploy a Linux ISO file located in a datacenter within vCenter and it works well.
As soon as I have some provisioning that uses floppy or CD-ROM images that need to be uploaded to the datastore, it fails.
I can successfully upload the files manually from the vSphere Client UI which means vCenter privileges are just fine for my user (I'm not full admin on the ESXi).
Using the vsphere-iso builder to deploy an ISO file available in the ESXi datastore, provisioning it with a floppy image for the OS installation.
The datastore ISO is correctly detected and mounted, but the build fails every time during the floppy image upload.
Please excuse any information I may have missed that would be required to troubleshoot.
Any idea or help is very welcome.
Reported on Packer github as well: https://github.com/hashicorp/packer/issues/11655
Thank you!
Overview of the Issue
Reproduction Steps
Run the following command using the builder below:
packer build -debug -var 'username=xxx' -var 'password=yyyy' .
Randomly, one build outputs a **404 Not found** error and one build outputs a **write tcp 10.1.21.208:57236->10.1.11.230:443: write: broken pipe**
In every case, the HTTP request that seems to fail is:
Put "https://<host>/folder/<vm-name-folder>/packer-tmp-created-floppy.flp?dcPath=<datacenter>&dsName=<datastore>
Packer version
1.8.0
Simplified Packer Template
packer-template.pkr.hcl
Operating system and Environment details
Ubuntu 20.04.4 LTS
vCenter version 6.7
Log Fragments and crash.log files
packer-broken-pipe-error.log
packer-404-not-found.log

pyqtdeploy: Unable to detect MSVC2015 or MSVC2017

I'm trying pyqtdeploy for the first time, following the docs.
I'm getting the following error when running build-demo.py:
C:\Users\Administrator\AppData\Local\Programs\Python\Python36-32\Lib\site-packages\pyqtdeploy\demo>python build-demo.py
pyqtdeploy-sysroot: Unable to detect MSVC2015 or MSVC2017.
The .py file seems to be getting its environment variables from the os module; running the same command in a Python console works fine, but somehow pyqtdeploy is having a problem with this.
I have the build tools installed in the system; what am I missing here?
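A quick way to see what pyqtdeploy would actually see is to print the MSVC-related variables from the same shell. The variable names below are the ones the vcvars scripts normally set and are assumptions here:

import os

# Variables normally exported by vcvars64.bat (assumed names; they vary by Visual Studio version).
for name in ("VSINSTALLDIR", "VCINSTALLDIR", "VCToolsInstallDir", "INCLUDE", "LIB"):
    print(name, "=", os.environ.get(name, "<not set>"))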
You must find the location of vcvars64.bat in your Build Tools folder and copy that path; it depends on your system environment and the version of Visual Studio installed.
Before running the pyqtdeploy script, paste that path into the command prompt and run it. This will initialize the environment and enable the detection of the MSVC x64 toolchain.
Step 1: Download all the required packages.
Step 2: Browse to the directory where Microsoft Visual Studio is installed.
Step 3: Search for vcvars64.bat in that directory.
Step 4: Run vcvars64.bat and, if it succeeds, run pyqtdeploy-sysroot sysroot.json.
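For example, from the same command prompt (the Build Tools path below is a typical default install location and may differ on your machine):

"C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Auxiliary\Build\vcvars64.bat"
pyqtdeploy-sysroot sysroot.json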

LibGDX creating (desktop) platform runnable

I want to export my project (a game) to different desktop platforms. I exported it from Eclipse (on Windows) and got a JAR file. On my machine I can start it, but on other machines it won't work. I guess the JRE is missing there.
So I followed the libGDX instructions on how to deploy to different platforms:
https://github.com/libgdx/libgdx/wiki/Deploying-your-application
But when I run that packr.jar app I get the following output:
D:\packing>java -jar packr.jar windows.json
Output directory 'D:\packing\windows' exists, deleting
Unpacking JRE
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
copying resources
minimizing JRE
unpacking rt.jar
packing rt.jar
Done!
After that I do get my exe file, together with a jre directory, my original game JAR file, and a config.json file, but the exe file just won't run. No failure message; it just won't run. Any idea what's going on here? Or maybe there is some other tool for packing JAR files?
All I need is to make my game runnable on desktop platforms: Windows, Mac & Linux.
For Windows you could use something like Launch4j, which simply puts a wrapper around your JAR file.
It can also bundle a given JRE, so your users do not need to have one installed.
Ok, solved this.
I found out that if I run the exe file from the console (cmd) and redirect its output to a file, I get an error report about the run attempt. So I did that:
myapp.exe > log.txt
and got this log file:
Loading JVM runtime library ...
Passing VM options ...
# -Xmx1G
Creating Java VM ...
Error occurred during initialization of VM
Unable to load ZIP library: D:\packing\windows\jre\bin\zip.dll
The zip.dll file was there, but something was wrong with it, so I replaced it with the one from my Java installation (mine was larger). After that I was able to run the exe file without problems.

Jenkins doesn't launch the application under test in the Chrome browser

I ran into an issue with Jenkins that I've never seen before, and I thought I'd get some advice. Jenkins wouldn't launch the AUT (application under test) in the Chrome browser when running Selenium tests.
Steps that I followed:
A Jenkins master and slave are set up on the same machine, not as a Windows service; I launch them manually via the command prompt.
I set up a project on the slave node with two build steps: one for MSBuild (I downloaded the plugin) to build the solution, and a second step that executes the Windows batch command that starts the tests.
I also have a TFS plugin to fetch the server version of the solution to build on Jenkins.
So when I build the job on Jenkins Slave,
The solution gets built successfully without any errors
Then, for the next build step, Jenkins executes the Windows batch command and loads the .dll file. It says "starting execution...".
ChromeDriver launches and opens the Chrome browser.
But the Chrome browser won't load the AUT. It just keeps trying to load it, indefinitely, until my Jenkins job times out.
While all this is happening, my CPU utilization is at 100%. The browser running the Jenkins UI on localhost and java.exe*32 consume most of it.
I ran the exact same MSTest.exe command (the one I entered in the build step) in a command prompt while Jenkins was not running, and it launched the AUT successfully and the tests ran.
I ran the exact same MSTest.exe command in a command prompt while Jenkins was running. It again spiked the CPU to 100% and the AUT never launched.
Any thoughts?
I was also running into this issue and solved it as follows.
Basically, the Jenkins slave has to be started at startup through a batch job rather than as a Windows service: when the slave runs as a service, it runs in a non-interactive session, so a browser it launches cannot display anything.
Here is the step-by-step process.
Node URL: http://host:port/computer/nodeName/
1. Go to the node "Node URL".
2. Click on "Mark this node temporarily offline".
3. Go to the machine where the slave is running.
4. Open a command prompt in admin mode.
5. cd to the location where Jenkins is installed.
6. Execute jenkins-slave uninstall.
7. Go to services (type services in Run) and stop the Jenkins slave that is running.
8. Restart the machine.
9. cd C:\Users\myUserName\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup
10. Create a new batch job (named, say, LaunchJenkinsSlave.bat) with the following content:
java -jar C:/Jenkins/slave.jar -jnlpUrl http://host:port/computer/nodeName/slave-agent.jnlp -secret yourSecret
netsh advfirewall firewall set rule group="remote desktop" new enable=Yes
FYI: you can refer to jenkins-slave.xml in your Jenkins install location for yourSecret, nodeName, host etc. if you forgot them.
11. Restart your machine. Observation: the Jenkins slave will be started automatically.
12. Go to the "Node URL" and bring the node back online.
Hope this helps.

Binary file refuses to run due to a missing shared library

I tried building recutils version 1.7 downloaded from the home page, using the standard configure, make, sudo make install sequence, but when trying to run the resulting binaries, like recinf, I get the error:
recinf: error while loading shared libraries: librec.so.1: cannot open shared object file: No such file or directory
Does this mean I made a mistake during the build or is the package itself in error?
As Etan Reisner said, the problem was that the shared object libraries were installed but not registered in the dynamic linker cache, hence the need to run ldconfig. After running
sudo ldconfig
the binaries ran properly. If I had looked in /usr/local/lib, I would have seen the libs there.
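To confirm how the library resolves before and after running ldconfig, the binary and the linker cache can be checked directly (the /usr/local/bin path assumes the default install prefix):

ldd /usr/local/bin/recinf | grep librec
ldconfig -p | grep librec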