SQLCompare: generate only a diff script - sql-server-2008

Is it possible to generate only a diff script using Red Gate SQL Compare?
In our database sync scenario we will use SQL Compare to generate the diff script and Tarantino to apply it. I've played a little with sqlcompare but haven't found a way to generate only the diff script without synchronizing the databases.
Thanks

You should install this patch.

The obvious answer with the SQL Compare command line is to use the /sf: argument to specify an output file and not to specify /sync. But it sounds like you may be running into some other issue that isn't described here.
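For example (a sketch only; switch names can vary between SQL Compare releases, so check sqlcompare /? for your version), something along these lines writes the differences to a script file without touching either database:

```
sqlcompare /Server1:DevServer /Database1:MyDb_Dev /Server2:ProdServer /Database2:MyDb_Prod /ScriptFile:diff.sql
```

The server and database names above are placeholders. The resulting diff.sql is what you would hand to Tarantino; the target database is only modified if you also pass the /Synchronize switch.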

Related

How to automate data extraction from Elasticsearch Dev Tools?

I have to do the following steps two or three times a day:
Log in to Elasticsearch.
Go to Dev Tools.
Run a specific query by selecting it and pressing Ctrl+Enter.
Select the results returned in the "buckets" section and copy them.
Go to https://www.convertcsv.com/json-to-csv.htm and paste the results so they are converted to CSV.
Download the CSV and import it into Google Sheets so I can view the results in a Looker dashboard.
This takes me some time every day, and I would like to know if there is a way to automate this routine.
Maybe there is an ETL tool that can perform at least part of the process, or perhaps a more specific way to do it with Python.
Thanks in advance.
I don't have much experience with this, and I searched online for similar issues but couldn't really find anything useful.
I don't know if you have tried it, but there is a reporting tool in Elasticsearch under "Stack Management > Reporting". Alternatively, there are other tools you can run from a server with crontab. Here are some of them:
ES2CSV. A little bit old, but I think it can work for you. There are examples inside the docs folder; you can send queries via a file and export the report to CSV.
Another option, and my preference: the pandas library for Python. You can write a script based on this article and get a CSV export; the article explains it really well. A sketch of this approach follows this list.
Another alternative is a library written in Java, but the documentation is a little weak.
Another Python alternative is elasticsearch-tocsv. It has been updated more recently than the first alternative, although the query samples are a bit thin; there is a detailed article you can check.
You can use elasticdump, which is written in NodeJS and is a great tool for exporting data from Elasticsearch. It has a CSV export option; you can see examples on its GitHub page.
I will try to find more and will update this answer from time to time. Thanks!
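To illustrate the pandas route: a minimal sketch, assuming the official elasticsearch Python client and pandas are installed, and using placeholder values for the host, credentials, index, field and aggregation names (replace them with the query you currently run in Dev Tools):

```python
# Minimal sketch: run the aggregation and dump its buckets to CSV.
# Host, credentials, index, field and aggregation names are placeholders.
import pandas as pd
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", basic_auth=("user", "password"))

resp = es.search(
    index="my-index",          # hypothetical index name
    size=0,                    # only the aggregation matters, skip the hits
    aggs={
        "my_buckets": {
            "terms": {"field": "some_field.keyword", "size": 100}
        }
    },
)

# This "buckets" list is the part you were copying by hand from Dev Tools.
buckets = resp["aggregations"]["my_buckets"]["buckets"]

# Flatten the buckets to CSV -- this replaces the convertcsv.com step.
pd.DataFrame(buckets).to_csv("results.csv", index=False)
```

Scheduled with crontab, a script like this removes both the manual copy and the JSON-to-CSV conversion; the resulting CSV can then be imported into Google Sheets.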

How to get value from hbase and put it into a variable?

This is probably a noob question, so I apologize in advance.
The HBase console, as far as I understand, is an extension of (or a script running over) JIRB. It also comes with several HBase-specific commands, one of which is 'get', to retrieve columns/values from a table.
However, it seems like 'get' only writes to the screen and doesn't return values at all.
Is there any native HBase console command that will let me retrieve a value (e.g. a set of rows/columns) and put it into a variable?
Thanks
No, there is not a native console command for this in 0.92. If you dig into the source code, there is a class, Hbase::Table, that could be used to do what you want. I believe this will be more exposed in 0.96. At this point, I have resorted to adding my own Ruby to my shell to handle a variety of common tasks (like using SingleColumnValueFilters on scans).
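If leaving the shell is acceptable, another route (my suggestion, not something built into the HBase console) is to read the value from a small script via the Thrift gateway, for example with the third-party happybase Python library. A minimal sketch, assuming the HBase Thrift server is running and using placeholder table, row and column names:

```python
# Minimal sketch using happybase (pip install happybase).
# Assumes the HBase Thrift server is listening on localhost:9090;
# table, row key and column names below are placeholders.
import happybase

connection = happybase.Connection("localhost", port=9090)
table = connection.table("mytable")

# Fetch a single row into a variable: a dict like {b"cf:qual": b"value"}
row = table.row(b"row-key-1")
value = row.get(b"cf:qual")
print(value)

# Or collect a set of rows for later processing.
rows = dict(table.scan(row_prefix=b"row-key-"))
```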

Ways to log MySQL Diff?

I'm working on a project right now that requires me to use a CMS that makes multiple changes to a database, and I'll need those changes later in order to create a post-install configuration file that reuses them. I know that there are lots of Windows-based programs that will show you MySQL diffs, but what about Linux? I would like the ability to keep an appending log of my changes so I know exactly what is going on under the hood.
The ideal scenario would be that I can capture a prior and a current state, compare them, and aggregate the output. Does anyone know a way to do this?
If these are the only changes made to your database, then one way to do this is to enable the binary log and use that as your change log. You can convert it back to a SQL script using the mysqlbinlog tool.
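To sketch that out (settings and paths are examples only and vary by distribution and MySQL version): enable the binary log in my.cnf, preferably in statement format so the log stays readable SQL, and then dump it back out with mysqlbinlog.

```
# my.cnf -- example settings, adjust names and paths to your setup
[mysqld]
log-bin = mysql-bin
binlog_format = STATEMENT
```

```
# Convert a binary log file back into SQL and append it to a running change log
mysqlbinlog /var/lib/mysql/mysql-bin.000001 >> schema-changes.sql
```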

mysql udf read my.cnf

I'm trying to write a MySQL UDF (user-defined function) that should read the MySQL configuration file, my.cnf, or access MySQL session and status variables.
How do I do that?
I'm sure there are functions for this implemented somewhere in the MySQL source code.
How do I find them?
Also, is there good documentation for the MySQL source API?
Thanks,
krisy
The easiest solution I found was starting MySQL from a script that sets environment variables, and then accessing those variables through the getenv() function in the UDF.
If anyone has a better solution, I'm very interested :-)

How to synchronize development and production database

Do you know of any applications for synchronizing two databases? During development it's sometimes necessary to add one or two table rows, or a new table or column.
Usually I write every SQL statement in a file, and during the upload I execute those lines on my production database (after backing it up).
I work with MySQL and PostgreSQL databases.
What is your practice, and what applications help you with this?
You asked for a tool or application answer, but what you really need is a process answer. The underlying theme here is that you should be versioning your database DDL (and DML, when needed) and providing change scripts so that you can update any version of your database to a higher version.
This set of links, provided by Jeff Atwood and written by K. Scott Allen, explains in detail what this ought to look like - and they do it better than I possibly could here: http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
For PostgreSQL you could use Another PostgreSQL Diff Tool (apgdiff). It can diff two SQL dumps very quickly (a few seconds on a database with about 300 tables, 50 views and 500 stored procedures), so you can find your changes easily and get a SQL diff you can execute.
From the APGDiff Page:
Another PostgreSQL Diff Tool is a simple PostgreSQL diff tool that is useful for schema upgrades. The tool compares two schema dump files and creates an output file that is (after some hand-made modifications) suitable for upgrading the old schema.
Have scripts (under source control, of course) that you only ever add to the bottom of. Combine that with regular restores from your production database to dev and you should be golden. If you are strict about it, this works very well.
Otherwise, I know lots of people use the Red Gate tools for SQL Server.
Another vote for RedGate SQL Compare
http://www.red-gate.com/products/SQL_Compare/index.htm
Wouldn't want to live without it!
Edit: Sorry, it seems this is only for SQL Server. Still - if any SQL Server users have the same question I'd definitely recommend this tool.
If you write your SQL statements for your development database (which are, I imagine, a series of DDL instructions such as CREATE, ALTER and DROP), why don't you keep track of them by recording them in a table with a "version" index? You will then be able to:
track your version changes
make a small routine allowing the "automatic" update of your production database by replaying the recorded instructions against it (see the sketch below).
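As a sketch of that small routine (assuming MySQL, the pymysql package, and a hypothetical schema_changes table with version and sql_text columns; all names are placeholders):

```python
# Minimal sketch: replay recorded DDL statements that production hasn't seen yet.
# Table and column names (schema_changes, version, sql_text) are hypothetical.
import pymysql

conn = pymysql.connect(host="prod-host", user="deploy", password="secret",
                       database="mydb", autocommit=True)

current_version = 12  # the version the production schema is currently at

with conn.cursor() as cur:
    # Fetch every recorded instruction newer than the production version, in order.
    cur.execute(
        "SELECT version, sql_text FROM schema_changes "
        "WHERE version > %s ORDER BY version",
        (current_version,),
    )
    for version, sql_text in cur.fetchall():
        print(f"Applying change #{version}")
        cur.execute(sql_text)

conn.close()
```

In practice you would also record the new current version somewhere (for example in a one-row metadata table) so the routine can be re-run safely.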
I really like the EMS tools.
Their tools are available for all popular databases, and you get the same user experience for every type of DB.
One of the tools is the DB Comparer.
TOAD has saved many an ass several times in the past. Why do people run SQL with no exit strategy?
The Red Gate one is good also.
Siebel (CRM, Sales, etc. management product) has a built-in tool to align the production database with the development one (dev2prod).
Otherwise, you've got to stick with manually executed scripts.
Navicat has a structure synchronisation wizard that handles this.
I solve this by using Hibernate. It can detect and autocreate missing tables, columns, etc.
You could add some automation to your current way of doing things by using dbDeploy or a similar script. This will allow you to keep track of your schema changes and to upgrade/rollback your schema as you see fit.
Here's a plain Linux bash script I wrote for syncing Magento databases... but you can easily modify it for other uses :)
http://markshust.com/2011/09/08/syncing-magento-instance-production-development
DBV - "Database version control, made easy!" (PHP)