All,
I have Hudson running fine, but I wanted an identical setup on another test machine, so I tried the following steps:
I copied HUDSON_HOME from my current server to the test server (the environment is identical to my current machine), including all jobs, plugins, and builds.
I restarted Hudson and Tomcat. Hudson tries to access all of the config files under HUDSON_HOME but is not able to fetch any data; it tries to connect to the slaves and throws exceptions like this one. Any advice is helpful:
java.io.EOFException: unexpected stream termination
at hudson.remoting.Channel.<init>(Channel.java:284)
at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:270)
at hudson.slaves.CommandLauncher.launch(CommandLauncher.java:111)
at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:169)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
If all else fails, try using the Backup plugin to back up your Hudson configuration settings and restore them elsewhere.
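If you go the plain file-copy route again, a minimal sketch (paths are placeholders; stop Tomcat first so nothing writes to the config mid-copy):
# on the old server, with Tomcat stopped
tar czf hudson_home.tar.gz -C /path/to/hudson_home .
# on the test server, unpack into the new HUDSON_HOME and restart Tomcat
tar xzf hudson_home.tar.gz -C /path/to/hudson_home
Given that the trace goes through CommandLauncher, it is also worth checking whether the copied node configurations contain launch commands that still reference the old server.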
My organization asked our team to use a new tool, AppDynamics, for better performance-testing results and reports.
For that I have to attach the javaagent to a running JVM. On their community site, this step is given for attaching the javaagent to a running JVM:
java -Xbootclasspath/a:<path_to_jdk>/lib/tools.jar -jar /<agent_home>/javaagent.jar <jvm_process_id>
However, when I run the same command I get the following output on cmd (using Windows 8 64-bit):
>Attaching to VM [6616]
java.lang.reflect.InvocationTargetException
Caused by: java.io.IOException: no such process
Exception in thread "main" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.io.IOException: no such process
This is the link to their documentation.
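For reference, the <jvm_process_id> placeholder is the PID of the target JVM; one way to list running JVMs with their PIDs, assuming a JDK is on the PATH, is the jps tool:
jps -l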
The problem with this approach is that you'll have to do it manually each time. I would highly recommend just configuring your app server to load the AppDynamics agent automatically. Another option is using the Universal Agent, which does auto-attach: https://docs.appdynamics.com/display/PRO43/Install+the+Universal+Agent. Doing this one-off attach is never really a good idea, as you'll have to get the PID each time.
The error indicates that you are probably not running the attach as the same user the JVM is running under, but it could also be permissions or something else, hence I would use the methods that work all the time :)
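As a sketch of the startup approach (file names and paths here are generic Tomcat conventions, not AppDynamics' documented layout): add a -javaagent flag to the JVM options, e.g. in Tomcat's bin\setenv.bat on Windows:
set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:<agent_home>\javaagent.jar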
SnappyData v0.5
My goal is to start a "spark-shell" from my SnappyData install's /bin directory and issue Scala commands against existing tables in my SnappyData store.
I am on the same host as my SnappyData store, locator, and lead (and yes, they are all running).
To do this, I am running this command as per the documentation here:
Connecting to a Cluster with spark-shell
~/snappydata/bin$ spark-shell --master local[*] --conf snappydata.store.locators=10.0.18.66:1527 --conf spark.ui.port=4041
I get this error when trying to connect the spark-shell to my store:
[TRACE 2016/08/12 15:21:55.183 UTC GFXD:error:FabricServiceAPI tid=0x1] XJ040 error occurred while starting server :
java.sql.SQLException(XJ040): Failed to start database 'snappydata', see the cause for details.
java.sql.SQLException(XJ040): Failed to start database 'snappydata', see the cause for details.
at com.pivotal.gemfirexd.internal.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:124)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:110)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:136)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.generateCsSQLException(Util.java:245)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:3380)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection.<init>(EmbedConnection.java:450)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection30.<init>(EmbedConnection30.java:94)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection40.<init>(EmbedConnection40.java:75)
at com.pivotal.gemfirexd.internal.jdbc.Driver40.getNewEmbedConnection(Driver40.java:95)
at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:351)
at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:219)
at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:195)
at com.pivotal.gemfirexd.internal.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:141)
at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServiceImpl.startImpl(FabricServiceImpl.java:290)
at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServerImpl.start(FabricServerImpl.java:60)
at io.snappydata.impl.ServerImpl.start(ServerImpl.scala:32)
Caused by: com.gemstone.gemfire.GemFireConfigException: Unable to contact a Locator service (timeout=5000ms). Operation either timed out or Locator does not exist. Configured list of locators is "[dev-snappydata-1(null):1527]".
at com.gemstone.gemfire.distributed.internal.membership.jgroup.GFJGBasicAdapter.getGemFireConfigException(GFJGBasicAdapter.java:533)
at com.gemstone.org.jgroups.protocols.TCPGOSSIP.sendGetMembersRequest(TCPGOSSIP.java:212)
at com.gemstone.org.jgroups.protocols.PingSender.run(PingSender.java:82)
at java.lang.Thread.run(Thread.java:745)
Hmm! I assume you are trying the spark-shell from your desktop and connecting to the cluster in AWS?
Not sure this is going to work, because the local JVM launched by spark-shell will attempt to connect to the peer-to-peer cluster in SnappyData, which is not likely to work.
snappy-shell, on the other hand, merely uses the JDBC client to connect (and hence will work).
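For example, a sketch assuming the locator client port is the 1527 used above:
~/snappydata/bin$ ./snappy-shell
snappy> connect client '10.0.18.66:1527';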
And you cannot use the locator client port (1527) for this anyway. See here.
Can you try with snappydata.store.locators=10.0.18.66:10334, NOT 1527, as the port? Unlikely this will work, but worth a try.
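That is, something like the following (a sketch; 10334 is the default peer-discovery port, so adjust it if your locator was started with a different one):
~/snappydata/bin$ spark-shell --master local[*] --conf snappydata.store.locators=10.0.18.66:10334 --conf spark.ui.port=4041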
Maybe there is a way to open up all ports and access to these nodes on AWS. Not recommended for production, though.
I am curious to see other responses from the engineering team.
Until then, you may have to start the spark-shell from within the network (AWS node).
I'm running Jenkins on an AWS EC2 server, accessing a MySQL DB on AWS RDS.
When running mvn clean install locally it's all good, but when Jenkins runs it on the EC2 server I get this error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on project plenty: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test failed: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /var/lib/jenkins/jobs/Plenty-api/workspace/plenty && /usr/lib/jvm/java-7-oracle/jre/bin/java -XX:MaxPermSize=128m -jar /var/lib/jenkins/jobs/Plenty-api/workspace/plenty/target/surefire/surefirebooter9140193949835996613.jar /var/lib/jenkins/jobs/Plenty-api/workspace/plenty/target/surefire/surefire21120104041831905tmp /var/lib/jenkins/jobs/Plenty-api/workspace/plenty/target/surefire/surefire_08001606298289854825tmp
Any ideas?
Oh yeah, that one scales from the EasyPeasyException (a call to System.exit() by one of your test classes) to the complete DeepShitException (I had it once locally; I reinstalled the VM and everything was good afterwards).
If it is not one of your test classes (or one of their dependencies) calling System.exit(), you will have to find the dumps from the VM crash and add them to the question for further assistance (I unfortunately won't be able to interpret whatever is stored in such dump files, but finding a solution is often teamwork :)).
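In case it helps with finding those dumps: HotSpot crash logs are usually named hs_err_pid<pid>.log and written to the crashed JVM's working directory, here presumably the project directory from the command above. A quick way to search, assuming shell access to the EC2 box:
find /var/lib/jenkins/jobs/Plenty-api -name 'hs_err_pid*.log'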
This is also documented in the plugin's FAQ: http://maven.apache.org/surefire/maven-surefire-plugin/faq.html#vm-termination
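One common cause on small EC2 instances (not confirmed here, just a frequent culprit) is the forked test JVM running out of memory. A sketch of raising the fork's memory via the surefire argLine parameter, with illustrative values:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <configuration>
    <!-- illustrative values; tune for the instance size -->
    <argLine>-Xmx512m -XX:MaxPermSize=256m</argLine>
  </configuration>
</plugin>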
When trying to restore my backed-up TFS collection via the Administration Console (attach collection), the attach fails with a NullReferenceException.
Are there any problems when restoring collections without the original configuration database?
The collection is listed, but its state doesn't change from "offline", even when I try to re-run the job.
When looking into [TFSCollection].[tbl_projects] via Management Studio I can see all projects.
When trying to restore the collection via
TFSConfig RegisterDB /sqlInstance:DataCenter\MSSQLServer /databaseName:DataCenter\MSSQLServer;Tfs_Configuration
I get the following error:
The following exception was caught while trying to validate the database: Keyword not supported: 'tfs_configuration;integrated security'.
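For comparison, RegisterDB takes the instance and the database name as separate arguments, so the value after /databaseName: should be only the database name. A corrected call, assuming the instance really is DataCenter\MSSQLServer, would look like:
TFSConfig RegisterDB /sqlInstance:DataCenter\MSSQLServer /databaseName:Tfs_Configuration
That reading is inferred from the error text, which shows the whole value being parsed as a connection string.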
Or when I run:
TFSConfig recover /ConfigurationDB:DataCenter\MSSQLServer;Tfs_Configuration /CollectionDB:DataCenter\MSSQLServer;Tfs_TFSCollection
The following error occurs:
TF246017: Team Foundation Server could not connect to the database. Verify that the server that is hosting the database is operational, and that network problems are not blocking communication with the server.
Maybe I've run into some serious problem when installing TFS Service Pack 1.
What can I do to restore my TFS?
Even the import in my virtual environment fails. See the log file.
If anyone has the same problem, take a look at http://support.microsoft.com/kb/2516423.
If you can't access your collections, you could try:
TfsConfig updates /reapply
To start the TFS Job Agent, you can run:
TfsServiceControl unquiesce
Or if the "Servicing" state doesn't change, you could try this:
TfsConfig diagnose /scope:updates
TfsConfig updates /reapply
Hope this helps.
I have an SSIS 2005 package that is up and running in our production environment. The package uses an SMTP Connection Manager to send an e-mail message to a designated user. We have a scheduled job that executes this package and also overrides the SMTP connection string so that the package can target the test or production mail server, which makes it possible to keep a single service on both our test and production servers, just configured differently.
We recently changed the server name of our production mail server, so we went into the scheduled job and changed the command-line values it runs with to point to the new server. However, the next morning the job failed, and the error log indicated that the job had tried to connect to the old mail server.
Is there something I'm missing about updating the SSIS package parameters? Do I have to delete the existing package, then reimport it and reschedule the job, for the new server change to take hold?
The DBAs where I work had a similar issue. They had to change a job's run parameters, but it seems that running the job with modified parameters only worked the first time. After that run, it kept using the old values on subsequent runs. They had to repackage the damned thing.
Are you using the package configuration feature? It can be picky about the order in which configurations are applied. There's some more info here: http://msdn.microsoft.com/en-us/library/ms141132.aspx
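For what it's worth, the kind of override a job step passes looks roughly like this (package path, connection manager name, and server name are placeholders):
dtexec /F "MailPackage.dtsx" /Connection "SMTP Connection Manager";"SmtpServer=newmailserver;UseWindowsAuthentication=True;EnableSsl=False;"
If a package configuration is applied after the command-line override, it can silently put the old server name back, which would match the behavior described here.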