h2o models on Azure ML Containers

I have a requirement to deploy h2o models on Azure. I have successfully handled sklearn models, but for sklearn the dependencies are, in my view, easier. For h2o, the Java runtime dependency is my bottleneck.
Will the container that I create have a Java runtime? If not, what are the suggested strategies?
Should I go for a VM instead?
Thanks,
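One strategy, sketched minimally here: an Azure ML environment can declare conda packages, so the scoring container can be built with a Java runtime alongside the h2o Python client. This assumes the azureml-core SDK, and the openjdk/h2o package names are assumptions to verify against your channels:

# a minimal sketch, assuming the azureml-core SDK
from azureml.core.conda_dependencies import CondaDependencies

cd = CondaDependencies.create(
    conda_packages=["openjdk"],  # Java runtime that h2o needs at scoring time
    pip_packages=["h2o"]         # h2o Python client
)
cd.save_to_file(".", "myenv.yml")  # feed this env spec to your deployment config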

Related

Load package from SQL Server

I used the following article to execute SSIS packages in parallel: https://www.sqlservercentral.com/articles/importing-files-in-parallel-with-ssis. That article explains how to execute a package from a folder location. In my situation I am deploying both packages. I tried the following code:
Application app = new Application();
Package pkg = app.LoadFromSqlServer(dtsxPackage, "localhost", null, null, null);
I am getting the error:
Cannot find folder "Package name"
The package deployment is as follows.
Using "ParallelExecusion.dtsx" I am trying to execute the "FileSync.dtsx" package. I am setting the package path as "FileSync\TeamR\FileSync.dtsx".
The code shown is for loading a package that is stored in the SQL Server database named msdb. It will use the binaries in sys.dtspackages90 or sys.ssispackages (table names approximate), but that only works for packages developed and deployed under the Package Deployment Model (2005-2008R2), or for SQL Server 2012+ projects explicitly defined as such.
What your screenshot shows is the Project Deployment Model, which is an .ispac deployed to the SSISDB database. While that package is on SQL Server, you do not use the LoadFromSqlServer method. Instead, you're going to use the same-ish methods that the CLR methods in the database use:
CreateSsisServerExecution
Set any Parameter/Property values
Start
Personally, unless I had a strong use case where I needed to control every aspect of the package execution, I'd just use T-SQL here (and remove class dependencies in your code) to execute SSIS packages.
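For example, a minimal T-SQL sketch; the folder/project names are assumptions read off the question's "FileSync\TeamR\FileSync.dtsx" layout, so adjust them to your catalog:

DECLARE @execution_id BIGINT;
-- create an execution for the catalog-deployed package
EXEC SSISDB.catalog.create_execution
    @folder_name = N'FileSync',
    @project_name = N'TeamR',
    @package_name = N'FileSync.dtsx',
    @use32bitruntime = 0,
    @execution_id = @execution_id OUTPUT;
-- start it (runs asynchronously by default)
EXEC SSISDB.catalog.start_execution @execution_id;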

Scaffolding a MySQL view using Pomelo and .Net Core 2.1

Apparently, with .Net Core 2.1, views are now supported. I was wondering if it is possible to scaffold a view using Pomelo and, if so, what the syntax is. I tried the "table" syntax with a view, but it didn't work:
dotnet ef dbcontext scaffold "Server=myserver.com;Database=myDatabase;User=userame;Password=password;" "Pomelo.EntityFrameworkCore.MySql" -t personsView -o models
It runs, but it only generates a dbContext - it doesn't generate the model.
I'm using Pomelo 2.1.1 and Visual Studio 2017 (15.7.5). My project is a .Net Core 2.1 Web API. On the back end, I have MySQL Server 5.6.30.
Using Pomelo, you can use the following command (within the Package Manager Console) to generate the models as well as the context class:
Scaffold-DbContext [CONNECTION_STRING] Pomelo.EntityFrameworkCore.MySql -OutputDir [OUTPUT DIRECTORY] -Context [NAME OF CONTEXT CLASS] -f
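For example, with the placeholder connection details from the question (the context class name here is just an illustration):

Scaffold-DbContext "Server=myserver.com;Database=myDatabase;User=username;Password=password;" Pomelo.EntityFrameworkCore.MySql -OutputDir Models -Context MyDbContext -f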

Adding Spark CSV dependency to Zeppelin

I'm running an EMR cluster with Spark on AWS.
The Spark version is 1.6.
When running the following command:
proxy = sqlContext.read.load("/user/zeppelin/ProxyRaw.csv",
format="com.databricks.spark.csv",
header="true",
inferSchema="true")
I get the following error:
Py4JJavaError: An error occurred while calling o162.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at
http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
How can I solve this? I assume I should add a package, but how do I install it, and where?
There are many ways to add packages in Zeppelin:
One of them is to change the conf/zeppelin-env.sh configuration file, adding the package you need (com.databricks:spark-csv_2.10:1.4.0 in your case) to the submit options, since Zeppelin uses the spark-submit command under the hood:
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.4.0"
But let's say that you don't actually have access to that configuration. You can then use Dynamic Dependency Loading via the %dep interpreter (deprecated):
%dep
z.load("com.databricks:spark-csv_2.10:1.4.0")
This will require that you load the dependencies before launching or restarting the interpreter.
Another way to do it is to add the dependency you need via the interpreter dependency manager, as described in the following link: Dependency Management for Interpreter.
Well, first you need to download the CSV lib from the Maven repository:
https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10/1.5.0
Check the Scala version that you are using: it will be either 2.10 or 2.11.
Then, when you call spark-shell, spark-submit, or pyspark (or even Zeppelin), you need to add the --jars option with the path to your lib.
Like this:
pyspark --jars /path/to/jar/spark-csv_2.10-1.5.0.jar
Then you can call it as you did above.
You can see a closely related issue here: How to add third party java jars for use in pyspark
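One more note: --jars ships only the jar you list, so transitive dependencies of spark-csv (such as commons-csv) are not pulled in automatically, whereas --packages resolves them from Maven. A sketch, assuming the same Scala 2.10 build:

pyspark --packages com.databricks:spark-csv_2.10:1.5.0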

how to deploy jbpm5.2 on tomcat 7 using mysql database?

Can anybody please help me with a link or document for running jbpm-5.2 on Tomcat 7 using MySQL as the database? Does jbpm deployment to Tomcat require some other tool, repository, or something else? I am a complete novice to jbpm... Please help me.
Thanks in advance...
I had the same problem and have posted a solution here (there is an Ant script referenced there which can be downloaded):
http://ironclaws.wordpress.com/2012/06/18/jbpm-5-2-tomcat-7-mysql-ant-script-18-2/
To summarize what is required in order to install jbpm 5.2 onto Tomcat 7 using the final full distribution:
Bitronix Transaction Manager: the distribution does not attempt to deploy or configure this despite the requirement. The above Ant installer will install Bitronix at the Tomcat server level, which is advantageous, as BTM may then be integrated into other projects on the platform.
The jBPM distribution is designed for the JBoss AS server, and the console/console-server packages include EL (expression language) libraries. These will conflict with those installed with Tomcat; the Tomcat EL should be preserved.
There is some confusion with respect to the javassist jar in the config/config-server distributions.
The included jars should be dropped, and javassist-3.4.GA included into both config and config-server.
The reason for this dependency is that Hibernate 3.4 is deployed as the persistence layer in this distribution.
There is a conflicting dom4j library, dom4j-1.6.jar, deployed with the console/console-server packages. This should also be removed.
It is not mentioned clearly that the parameter -Dreporting.needcontext=true needs to be passed to the Tomcat JVM in order to allow the gtw-server to correctly instantiate the report module loader (one way to pass it is sketched after this list).
For the ‘Demo’ it is important to configure the human-task server persistence correctly, and additionally to address the includeantruntime issue when starting this service.
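Regarding the JVM parameter above, a minimal sketch of one standard way to pass it, via Tomcat's setenv script (create the file if it does not exist; catalina.sh picks it up automatically):

# $CATALINA_HOME/bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS -Dreporting.needcontext=true"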

Jruby log4j integration

I am currently working on integrating a Java application, General Architecture for Text Engineering (GATE), with a Rails application using JRuby. When we worked on integrating JRuby with log4j, I got the following error:
0 [main] DEBUG Main.class - Hello world
gate/Gate.java:80:in `<clinit>': java.lang.NoClassDefFoundError: org/apache/log4j/Logger (NativeException)
from gateapp/Main.java:86:in `main'
from test.rb:12
test.rb is the name of the Ruby program.
I tried importing all the Apache log4j libraries and included the class file in the test.rb file.
When I run the Java program alone, it runs fine. But when I generate the jar file and include it in the Ruby file (test.rb), the
java.lang.NoClassDefFoundError: org/apache/log4j/Logger (NativeException)
problem occurs. How do I deal with this problem?
You need to make sure the log4j JAR file is on your classpath. One way to do this is to set the CLASSPATH variable in your environment. Another way would be to call require in your Ruby code, like:
require "/some/path/MyStuff.jar"
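For instance, a minimal sketch for this question's setup; the jar file name and location are assumptions, so point them at your actual log4j jar:

require 'java'
# the jar path/version here are assumptions -- adjust to your installation
require File.expand_path('log4j-1.2.17.jar', File.dirname(__FILE__))

# simple console configuration so the logger has an appender
org.apache.log4j.BasicConfigurator.configure
log = org.apache.log4j.Logger.getLogger('test')
log.info('log4j resolved from the classpath')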
Here is my config to set it up with the Couchbase Java SDK:
include Java

def setup_log4j
  # route spymemcached (used by the Couchbase client) logging through log4j
  java.lang.System.setProperty("net.spy.log.LoggerImpl", "net.spy.memcached.compat.log.Log4JLogger")

  # file appender writing to the Rails environment's log file
  fa = Java::OrgApacheLog4j::FileAppender.new
  fa.setName("FileLogger")
  fa.setFile("./log/#{Rails.env}.log")
  fa.setLayout(Java::OrgApacheLog4j::PatternLayout.new("%d %-5p [%c{1}] %m%n"))
  fa.setThreshold(Java::OrgApacheLog4j::Level::INFO)
  fa.setAppend(true)
  fa.activateOptions

  # attach the appender to the root logger
  Java::OrgApacheLog4j::Logger.getRootLogger.addAppender(fa)
end
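For example, you might call it once at boot, from a Rails initializer (the file name is hypothetical):

# config/initializers/log4j.rb -- run before the SDK logs anything
setup_log4j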
Just be aware that I required the log4j.jar file earlier.
It is worth mentioning that there is a project named log4jruby.