I have read through the documentation but can't find answers to the following questions:
I would prefer to use an already running Spark cluster (i.e. add a jar so I can use SnappyContext). Is that possible, or is it mandatory to use the bundled Spark? If possible, please assist: SPARK_HOME seems to be set at runtime by the launchers.
Where should JAVA_HOME be defined? For now I set it in bin/spark-class on all Snappy server nodes.
Can SnappyData be built with Scala 2.11?
Appreciated,
Saif
Right now we don't have support for running Snappy with stock Spark, but we are working towards it. For now you can use the Snappy version of Spark.
For Q2, you can set JAVA_HOME in the command line before starting the Snappy servers.
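For example, a minimal sketch (the JDK path and start script name below are assumptions for a typical SnappyData layout; adjust them for your installation):
# Assumed paths; adjust JAVA_HOME and the start script for your install.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
./sbin/snappy-start-all.sh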
We have not tested Snappy with Scala 2.11, so we are not sure what issues may arise.
I'm currently developing a Python application and I would like to know if there are any ways to pack MongoDB and MySQL (or PostgreSQL) into the application. By packing I mean taking those applications' binaries and distributing them with the application files.
For example, Metasploit PRO has some applications like nginx, postgresql, java, ruby, etc... under /opt/metasploit (they come with the application setup), and I would like to know if that could be done with any Linux application. And if so, how could I "choose" what binaries are needed? Would they work for any Debian distro? Can any application follow that procedure? Could it be done with MySQL and MongoDB?
P.S.: I would like to do this so I can distribute a single application instead of having to force the user to set up the databases independently, and out of pure curiosity.
Thank you very much in advance!
MongoDB already distributes its binaries as standalone binaries in the sense that everything needed for the database (or shell tools) to run is included in the respective binary (mongo/mongos/mongod).
However, these binaries are OS (Linux distribution)-specific. Meaning, for example, they dynamically link against libssl and libcurl and you need to have the right versions of those libraries on the host system. So, for example, a MongoDB binary for Ubuntu 14.04 is likely to not work on Ubuntu 16.04.
As far as I know, MongoDB does not support building for "generic Linux"; only specific OSes like Ubuntu 16.04 are supported.
With that said, you could possibly build a "portable" MongoDB yourself if you accept some limitations, since its source code is available:
You need to figure out how to build MongoDB on some Linux distribution that gives you a baseline glibc compatible with all of your targets.
You may have to forego functionality like TLS connections, or figure out how to link against openssl statically (probably non-trivial).
This would be easier with older MongoDB versions (4.0, 3.6) since they have fewer system dependencies.
I think you can pack the required services and your application as a Docker image or a virtual machine box.
In my experience, I have packaged MongoDB and other Linux CLI tools with my NodeJS web application into a VM box using Vagrant. Alternatively, you can use Docker if you prefer a container-based application.
If you use Vagrant, the provisioning feature may help you set up the database before running the application. Check https://www.vagrantup.com/docs/provisioning
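If you go the Docker route, a minimal sketch might look like the following (the application image name and the MONGO_URL variable are hypothetical placeholders; only the official mongo image is a real published image):
# Create a network so the containers can reach each other by name.
docker network create appnet
# Run MongoDB from the official image, keeping data in a named volume.
docker run -d --name mongo --network appnet -v mongo-data:/data/db mongo:4.4
# Run your (hypothetical) application image on the same network, pointing it at MongoDB.
docker run -d --name myapp --network appnet -e MONGO_URL=mongodb://mongo:27017/mydb my-python-app:latest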
How to exclude the org.json version from the Mule 3.9.0 build path?
The version is json-20140107, and I want to use the latest json version. I tried to exclude it in mule-commons but it did not help.
The json-20140107.jar library is distributed with Mule 3.9.0. Because of how Mule implements classloading, at execution time classes loaded from that jar will override a newer version included in your application. You should not change the provided version, because that is the one Mule was tested with; changing any provided library in the distribution can cause unexpected errors.
You could package a new version of the library and try to use Fine Grain Classloader Control; however, that appears to be an Enterprise Edition feature that is not available in the Community Edition. If you have the Enterprise Edition, it is highly recommended to use the latest patch version (currently 3.9.4) instead of 3.9.0.
Another solution could be to migrate to Mule 4.x, which uses classloading isolation to avoid this kind of issue: you can use any version of a library inside an application without conflicting with the libraries provided by the runtime. Again, using the latest version available is the recommended way to go. Note that Mule 3 applications are not compatible with Mule 4, so you will need to migrate existing applications.
I am trying to connect to AWS Athena from my Windows and Mac systems. My goal is to have a SQL editor that I can use to perform quick research on the data. I was trying to find tools and tutorials for connecting to Athena, but so far I have only found some tutorials around SQL Workbench. What are some other tools that you leverage? Is there something in particular that you like about a tool, and how easy was it to set up on Windows/Mac?
I use SQuirreL SQL for connecting to Athena, and it has served the purpose so far. Once you import the JDBC drivers (you can download them from AWS's site), the tool itself is pretty straightforward to set up. The URL that you can use to connect looks like this:
jdbc:awsathena://AwsRegion=<AWS Region>;User=<AWS Access Key>;Password=<AWS Secret Key>;S3OutputLocation=<S3 folder>
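For example, with illustrative placeholder values (the region and bucket below are made up; substitute your own credentials and an S3 output location you own):
jdbc:awsathena://AwsRegion=us-east-1;User=<your access key>;Password=<your secret key>;S3OutputLocation=s3://my-athena-query-results/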
I'm pasting the "Overview" of SQuirreL below:
SQuirreL SQL Client is a graphical Java program that will allow you to
view the structure of a JDBC compliant database, browse the data in
tables, issue SQL commands etc, see Introduction. The minimum version
of Java supported is 1.6.x as of SQuirreL version 3.0. See the Old
Versions page for versions of SQuirreL that will work with older
versions of Java.
SQuirreL's functionality can be extended through the use of plugins.
A short introduction can be found here. To see the change history
(including changes not yet released) click here.
For a more detailed introduction see the English or German version of
our paper on SQuirreL.
Susan Cline graciously took the time to document the steps she
followed to setup an Apache Derby database from scratch and use the
SQuirreL SQL Client to explore it.
Quite some time ago Kulvir Singh Bhogal wrote a great tutorial on
SQuirreL and published it at the IBM developerWorks site. He has
kindly allowed us to mirror it locally. The tutorial is not really up
to date but especially for doing the first steps it is still of help.
SQuirrel was originally released under the GNU General Public License.
Since version 1.1beta2 it has been released under the GNU Lesser
General Public License.
Another tool that I have used pretty extensively is SQL Workbench. This is also sort of recommended on the AWS site. The tool is good, but I found that it would hang up sometimes and I would lose my work.
Both of these can be easily downloaded from the links provided, or, if you like to use a CLI, Homebrew can be used on macOS or Chocolatey on Windows.
Another tool that you can use is DataGrip by JetBrains; a guide to setting it up can be seen here. The functionality of DataGrip is also built into IntelliJ Ultimate Edition.
DB Visualizer is another tool that can be used to connect to AWS Athena; the guide to connect can be found here.
TeamSQL and Razor SQL are some other tools that you can leverage.
One of the strengths of JDBC drivers is that as long as a tool supports JDBC, you can use it for any data source which has a JDBC driver. First, get the JAR file for the JDBC driver for Athena here: Amazon Athena Connect with JDBC. Java works across platforms, so as long as you have Java in your Windows/Mac environment, you should have no problem using any of these tools.
The tool SQL Workbench/J is fairly popular, but I find it frustrating to work with when switching between multiple databases.
Another tool is Squirrel SQL, which also supports JDBC drivers. I prefer it, but it looks a little less pretty than SQL Workbench/J. Once you've downloaded the JDBC driver, configure it in SquirrelSQL by going to Drivers and then adding a new one. Label it "Amazon Athena" and specify the Example URL as
jdbc:awsathena://AwsRegion=[Region];User=[AccessKey];Password=[SecretKey];S3OutputLocation=[Output];[Property1]=[Value1];[Property2]=[Value2];...
Leave the Website URL Blank, but specify the Class Name as com.simba.athena.jdbc.Driver. Add the .jar file of the JDBC driver to the "Extra Class Path" page.
Once you've set up the driver, you can set up connections by going to the Alias tab and hitting the plus sign. Simply fill in the values in the example URL to point to your data source. Once you're connected, you're good to start writing queries.
SquirrelSQL saves the connection information for you, allowing you to quickly jump between data sources, and makes it easy to write multiple queries in one input window, with their outputs going to separate tabs in the output pane. I've used it for database exploration, DDL, and regular day-to-day tasks with data. It's been good for most anything I've connected it to. It is definitely not perfect, but it's getting better all the time.
I guess you need a SQL editor that you can use to perform quick research on the data. I suggest two approaches.
The first is the traditional/old solution: pick one system to act as the database server, install the DBMS there, and connect to it from your other operating systems over a fixed connection.
The second is to run the database with Docker. This is a newer and more popular solution, but it requires learning to work with Docker.
If you want to use MySQL on a Mac, read this article:
Installing MySQL in a Mac OS X environment
If you want to use MySQL on Windows, read this article:
How to Install MySQL on Windows
But if you need a portable, self-contained environment for MySQL or another DBMS, you can use Docker. Docker is very flexible, but you need an internet connection.
If you want to use Docker, read this article and visit the Docker site:
Docker: SITE
Docker Doc: Start a Remote MySQL Server with Docker quickly
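As a minimal sketch of that approach (the container name, password, and volume name are placeholders; the image, environment variable, and port come from the official mysql image):
# Start a local MySQL server in a container, with data kept in a named volume.
docker run -d --name local-mysql -e MYSQL_ROOT_PASSWORD=change-me -p 3306:3306 -v mysql-data:/var/lib/mysql mysql:8.0
# Then point your SQL editor at localhost:3306 with user root and the password above.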
I want to connect to a MySQL database using the Dart sqljocky package, but it's not compatible; Dart analysis shows an error:
Resolving dependencies...
The current Dart SDK version is 2.0.0.
Because dartAuth depends on sqljocky >=0.1.3 which requires SDK version <2.0.0, version solving failed.
According to the error, the installed version of the Dart SDK is too new for the sqljocky version required by dartAuth.
You'll either need to downgrade to a pre-2.0.0 version of the Dart SDK or upgrade to a dartAuth version that doesn't require an outdated version of sqljocky (which hasn't been updated in three years).
It's not clear from your post, but it looks to me like you must be using a very old version of dartAuth if it's depending on sqljocky, so moving to a newer version of the former and replacing the latter with something that's actively maintained seem to be reasonable first steps.
I am creating a project using Django + Python 3.4. However, according to this question, the standard MySQL connector for Python does not support the language's third release (which I'm having a difficult time believing, but that's beside the point).
Ultimately, my question is whether or not it is justifiable to use a non-standard connector fork (such as that which is presented in the linked question above) over downgrading to Python 2.x. For example, are there any significant security issues with using the forked connector instead of waiting for an official release?
MySQL Connector/Python is not bundled with Django, but it is made by Oracle. So I'm not sure if it should be considered an unofficial fork.
I'm using it with an old MySQL database, Python 3.4, and Django 1.7. The only problem I've noticed is that sometimes the error messages are a bit wonky.
Installation is very easy:
pip install mysql-connector-python --allow-external mysql-connector-python
I have been using the mysql-connector-python connector for several months without issue, with Python 3.4.2 and Django 1.7.1.
It actually works much better than the git fork someone else made of the Python 2 connector.
My opinion: don't downgrade to Python 2. This is a solid connector in my experience.