SnappyData examples not working in rowstore mode

I am starting to learn SnappyData and am running the SnappyData examples as per the documentation. When I start the SnappyData server like this:
$SNAPPY_HOME$ ./sbin/snappy-start-all.sh
./bin/run-example snappydata.JDBCExample
the SnappyData examples execute. But when I use SnappyData rowstore, the same command does not work. I start it like this:
$SNAPPY_HOME$ ./sbin/snappy-start-all.sh rowstore
./bin/run-example snappydata.JDBCExample
and then run the example the same way. Does anyone know how to run the Snappy examples in rowstore mode? Please share. Thank you. For reference, see the SnappyData rowstore documentation
link.

The JDBCExample is written for a SnappyData cluster and will not work against a rowstore-only cluster. Also, the correct command to start a rowstore cluster is:
/sbin/snappy-start.sh rowstore
Note there is no hyphen.
The JDBCExample uses SnappyData-format DDL to create tables, which will not work in pure rowstore mode.
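For illustration, here is a rough sketch of the DDL difference (the table names and columns are made up). The first statement uses SnappyData's column-table syntax, which a rowstore-only cluster cannot execute; the second is plain row-table DDL of the kind rowstore mode accepts.
-- SnappyData-specific column-table DDL (needs a full SnappyData cluster):
CREATE TABLE airline_col (id INT, carrier VARCHAR(20), dep_delay INT) USING column OPTIONS (BUCKETS '8');
-- Plain row-table DDL that a rowstore-only cluster can execute:
CREATE TABLE airline_row (id INT, carrier VARCHAR(20), dep_delay INT);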

Related

Writing many MySQL queries (in VS Code) in one file. How to avoid writing 'USE DBaseName' constantly?

I'm learning SQL / MySQL using WampServer and the MySQL extension for VS Code, so that I can write, comment, and keep the code in the VS Code editor instead of sending volatile queries from the command line.
This morning I created a database by right-clicking a database / New Query:
After creating it this way, I was able to write code that runs without having to write USE DBaseName before the selected lines, as in the three selected lines you can see below:
Yet, after restarting my laptop at home, the same code will not run. It returns the usual
undefined
Error: ER_NO_DB_ERROR: No database selected
unless I write the USE statement, as in the last group (4 lines) of code. So I have to write that every time I want to try a query...
Why did it work this morning and not now? How can one run queries without having to constantly write USE DBaseName?
(links to explanations to further understand the underlying mechanism are also very welcome...)
EDIT: posting an image of the server in VS Code to answer a comment:
I'm using a local server (pic below), which hasn't changed since this morning...
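For what it's worth, a minimal sketch of the two usual workarounds (DBaseName and the table name below are placeholders): either put USE once at the top of the script, or qualify every table with its schema so no default database is needed.
-- Option 1: select the default database once at the top of the .sql file.
USE DBaseName;
SELECT * FROM customers;
-- Option 2: qualify each table with its schema, so no USE statement is required.
SELECT * FROM DBaseName.customers;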

Is there a MySQL client that can run multiple queries in one tab?

Note that I'm not asking about how to send multiple SQL statements in one request to the DB server.
I've worked with SQL Server Management Studio for a while, and I liked how you can have a lot of SELECT statements in one .sql file: if you press F5, it runs the SELECT statements independently and presents multiple result sets. Is there something like this for MySQL?
With all of the MySQL clients I've tried, one tab is limited to one SQL statement. You can write multiple statements, but you can only run one of them at a time.
I tried SQuirreL, HeidiSQL, MySQL Workbench CE, and DBeaver. Right now I'm stuck with the free version of SQLyog. They all have great features, but not one has the feature I was looking for.
Edit:
Thanks to Aishatter for suggesting Toad for MySQL. I made a screenshot of it showing the feature I was looking for:
imgur screenshot
It can even remember previous executions; these are the "Set 6" and "Set 7" tabs seen in the screenshot.
Edit2:
Toad was too slow for me, so I ended up with SquirrelSQL, which also has the feature I'm looking for. Note that I think this is only present in their latest snapshot build, 20150623_2101.
Have you tried Toad for MySQL?
It works like a charm; it is like SQL Server Management Studio for MySQL.
You can execute multiple queries at the same time, and it has a query formatting feature.
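For example, a script along these lines (the table names are invented) is the kind of file in question: running the whole tab at once should return one result grid per SELECT.
-- Each SELECT below should produce its own result set when the whole script is run.
SELECT id, name FROM customers ORDER BY name;
SELECT COUNT(*) AS order_count FROM orders;
SELECT status, COUNT(*) AS n FROM orders GROUP BY status;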
Yes.
Here you go: HeidiSQL
http://www.heidisql.com/download.php
This is one of the finest tools I have been playing with for the last 4 years.

Why is the init file not populating Memory Tables in MySQL?

To optimize system performance, we are storing a few static tables in RAM (copies of which also exist on the hard drive, as MyISAM tables). Now, as we all know, when the server restarts, all data in RAM is lost. To avoid that, we created an init file with 4 SQL statements.
Please note that each SQL statement is on a separate line, ends with a semicolon (;), and there are no comments anywhere, so from my limited knowledge I believe I have avoided the basic mistakes. However, when I restart MySQL manually from the command line to test it, the memory tables are empty. There are no issues with the init file itself, because when I execute it manually from the command line, the data is populated without any issues.
Any help in terms of resolving this will be much appreciated!
Thanks!
Udayan
Something is not right here.
Just to check, I tried restarting my local MySQL server using /etc/init.d/mysql restart, and it started up running as the mysql user (not root).
So we will need the following to try to figure this out, because I am just about positive that the problem is either that the file has the wrong permissions or that it is in a location the user running mysqld does not have access to.
What version of Linux are you running?
What version of MySQL do you have installed?
Is 'init-file=' in the right section of my.cnf?
What is the output of 'ps -ef | grep mysqld'?
What is the output of 'ls -lrt /tmp/initfile.sql'?
What did you mean by 'There are no issues with the initfile itself, because when I execute the initfile manually from the command line, the data gets populated without any issues.'?
I cannot help but think that it is a permissions problem, so the fourth and fifth answers are the ones I am most interested in.
You should add all of these answers to your question, so that people have everything they need to help you solve your problem.
I appreciate all your suggestions, but I have figured out what the issues were.
In the SQL file I needed to specify which database the init file should populate.
I also had trailing semicolons in the SQL statements; apparently that is not a good idea.
Once I made these two small changes, everything started working fine.
Again, thanks for the pointers!
Udayan
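For reference, a rough sketch of what an init file following those two fixes might look like (the database name, table names, and file path are placeholders; the comments are included here only for readability, while the actual file had none):
-- my.cnf, in the [mysqld] section, would point at this file with: init-file=/tmp/initfile.sql
-- One statement per line, schema-qualified so no default database is assumed,
-- and no trailing semicolons, per the fix described above.
INSERT INTO mydb.mem_countries SELECT * FROM mydb.countries
INSERT INTO mydb.mem_currencies SELECT * FROM mydb.currencies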

Python3 MySQL Drivers

Recently I switched to Python 3 (3.1 on a FreeBSD system), and I would like to work with MySQL databases.
First I tried pymysql3-0.4, but it failed when I used SUM in my query, with this error:
TypeError("Cannot convert b'46691486' to Decimal",)
Then I tried oursql-0.9.2, but it seems to have no Unix socket support (the documentation says otherwise, but it does not recognize the socket protocol).
Lastly, I decided to give mypysql-0.5.5 a chance, but the installation failed.
Could you recommend a properly working MySQL driver for Python 3, or at least help solve one of these problems? I would be very grateful.
The oursql documentation is a little tricky. There is a list of the Connection's parameters, but it does not contain the unix_socket parameter. If I set that and the protocol parameter, the whole thing just works fine :)
If someone has trouble with inserts (getting a _statment charset AttributeError), see https://bugs.launchpad.net/oursql/+bug/669184: change the lines in oursql.c to the code in the report and rebuild it (it will be fixed in 0.9.3).
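A minimal sketch of the kind of connection call described above, assuming oursql accepts MySQLdb-style keyword arguments plus the unix_socket and protocol parameters mentioned in the note (the socket path, credentials, and query are placeholders):
import oursql

# Connect over the local Unix socket instead of TCP; per the note above, passing
# unix_socket together with the protocol parameter makes oursql use the socket.
# All values below are placeholders.
conn = oursql.connect(
    db='mydb',
    user='myuser',
    passwd='secret',
    unix_socket='/tmp/mysql.sock',
    protocol='socket',
)

curs = conn.cursor()
# A SUM() query of the sort that triggered the Decimal TypeError under pymysql3.
curs.execute('SELECT SUM(amount) FROM payments')
print(curs.fetchone())
curs.close()
conn.close()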

Migration from Progress DB to MySQL using Linux

I am trying to replicate a Progress database to MySQL 5.1. I came across a few software packages, and a few suggestions on Stack Overflow as well as other websites, which require a tool like Pro2SQL or another SQL migration tool such as the MySQL migration tool. The problem I am faced with is that I will be running MySQL on Linux (I am using bash scripting to query the MySQL database). Is there a tool for Linux, or some other means?
Currently I am using JDBC to connect and retrieve data, but mapping the database is hard and may create flaws in the long run due to mapping problems. Also, this process will be repeated quite often, for backup.
The MySQL migration tool is a good solution, but it does not support the Linux command prompt, so I have to implement this in another, better-optimized way. Please suggest what should be done further. Thanks a ton for the support.
If it is just about dumping:
If I understand your problem, the solution holds in two lines (if you are following SQL standards):
pg_dump <yourdatabase>
mysql < <yourfile.sql>
With the first line you dump your database; many options exist depending on whether you want to dump tables, content, schemas, etc., so see the man page for more details.
With the second line you just load the dump into MySQL.
If it is about mapping:
Take a look at Kettle. It is an open source ETL tool, it works really well on Linux, and you can automate tasks using crontab, as roughly sketched below.
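As an illustration of that crontab idea (the schedule, paths, and database names are placeholders, and the dump command is simply the one given above):
# Crontab entry: every night at 02:00, dump the source database and load it into MySQL.
0 2 * * * pg_dump yourdatabase > /tmp/yourfile.sql && mysql yourdb < /tmp/yourfile.sql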
Hope I could help.