I am working on scripting all indexes from one environment to another.
I can get the index names from the query below by specifying the bucket:
select RAW name from system:indexes where keyspace_id='namedDB'
Now my question: is there any way I can get the index definition using N1QL? In SQL Server we used to run
sp_helptext 'Indexname'
and it would show the index definition. Is there any way to do this in N1QL? If not, how can I extract the definitions of all indexes at once rather than going one by one?
Thanks
Ritz
There is no direct statement in N1QL for this. You need to build the statement yourself from system:indexes.
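For example, a rough sketch of building the statements from system:indexes might look like this (CONCAT2() requires Couchbase 6.5 or later; on older versions you would have to join index_key yourself, and the bucket name is a placeholder):

SELECT RAW 'CREATE INDEX `' || name || '` ON `' || keyspace_id || '`('
           || CONCAT2(',', index_key)
           || IFMISSING(' WHERE ' || `condition`, '')
           || ';'
FROM system:indexes
WHERE keyspace_id = 'namedDB'   /* placeholder bucket name */
  AND is_primary IS MISSING;    /* skip the primary index */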
You can try one of the following options:
Run the following command on each index node:
https://docs.couchbase.com/server/5.5/rest-api/get-statement-indexes.html
curl -v http://Administrator:password@127.0.0.1:9102/getIndexStatement
Use the UI to copy all the definitions.
Check out cbbackupmgr: https://docs.couchbase.com/server/5.5/backup-restore/cbbackupmgr-restore.html
Add the whitelist described in the Security section of https://docs.couchbase.com/server/5.5/n1ql/n1ql-language-reference/curl.html and run:
SELECT RAW re
FROM CURL("http://Administrator:password@127.0.0.1:9102/getIndexStatement", {}) AS re;
Related
Thanks for reading! I would like to define an external table on a storage account where the path format is as follows:
flowevents/resourceId=/SUBSCRIPTIONS/<unique>/RESOURCEGROUPS/<unique>/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/<unique>/y=2022/m=05/d=11/h=09/m=00/<unique>/datafiles
I would like to partition the external table by date. The relevant documentation for this is located here. My understanding and experimentation indicate that this might not be possible, given the URI path above, where there are unique values before the values I would like to partition on, and the answer given by Slavik here.
Is it possible to create an external table using wildcards to traverse the folders to achieve the partition scheme described above?
Is the only way to solve this to define multiple storage connection strings for all possible values of <unique>? Is there an upper limit to how many values may be provided?
The path traversal functionality I'm looking for can be found in LightIngest:
-prefix:resourceId=/SUBSCRIPTIONS/00-00C-00-00-00/RESOURCEGROUPS/ -pattern:*/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/*/y=2021/m=11/d=10/*.json
It does not seem to be supported when defining external tables. A possible reason for this is that the engine will get overloaded if you load too many files from external storage. I got the following error message when I defined 50 connection strings:
Partial query failure: Input stream/record/field too large (E_INPUT_STREAM_TOO_LARGE). (message: '', details: '')
It worked as intended when I provided 30 connection strings and used four virtual columns for partitioning. This error message is not described in the documentation, by the way.
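A rough sketch of such a definition, with placeholder table, column, storage account, and key values, might look like the following (the real table would list every <unique> segment as a virtual column and include all of the connection strings):

.create external table FlowEvents (records: dynamic)
kind = storage
partition by (
    Subscription: string,
    ResourceGroup: string,
    Nsg: string,
    Date: datetime
)
pathformat = (
    "resourceId=/SUBSCRIPTIONS/" Subscription
    "/RESOURCEGROUPS/" ResourceGroup
    "/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/" Nsg "/"
    datetime_pattern("'y='yyyy'/m='MM'/d='dd'/h='HH", Date)
)
dataformat = multijson
(
    h@'https://<account>.blob.core.windows.net/flowevents;<key>'
)

Here Subscription, ResourceGroup, and Nsg are virtual columns standing in for the <unique> path segments, and Date covers the y=/m=/d=/h= part.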
Update, for Kusto developers: I attempted to use virtual columns for the whole URI path and then query to generate the connection string. I verified that the table definition is correct using:
.show external table X artifacts limit 1
It would show the partitions with populated values. However, when attempting to query the external table using the recommended operators ("in" or "has") to navigate the partitions, it does not work: the query runs forever despite fetching only a small file and running on a cluster of D14_v2 VMs. If I were to define an external table just for that file, it would load just fine.
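For context, the recommended filtering style on those virtual columns, written against the sketch definition above, looks roughly like this (all names are placeholders):

// placeholders throughout; the filters target the virtual partition columns
external_table("FlowEvents")
| where Nsg has "my-nsg"
| where Date between (datetime(2022-05-11 09:00:00) .. 1h)
| take 10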
I'm generating a script from an existing MySQL schema using DataGrip's SQL Generator feature. I obtain a working script containing CREATE INDEX statements. I would prefer the indexes to be created by a KEY clause in the CREATE TABLE statement. I can't see an option in SQL Generator to get that. Am I missing something? I have dozens of tables, so I can't just do it by hand.
The server is a MySQL 5.7.
You can use SQL Generator | Generate: Definitions provided by RDBMS server to get the same result.
I found a solution using not the SQL Generator, which doesn't seem to be able to do what I want, but a raw export of the database structure. Select the schema (you can select various and multiple objects: schemas, tables, triggers, procedures, functions), then right-click: SQL Scripts -> Request and Copy Original DDL, which copies the script extracted from the database. You can then paste it wherever you want, for example into a SQL console or a text editor.
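For what it's worth, the DDL that MySQL itself returns already lists the indexes as KEY clauses inside CREATE TABLE, which is what this export ends up copying. For an illustrative (hypothetical) table:

SHOW CREATE TABLE mydb.orders;

-- MySQL 5.7 returns something along these lines, with the indexes inlined:
-- CREATE TABLE `orders` (
--   `id` int(11) NOT NULL AUTO_INCREMENT,
--   `customer_id` int(11) DEFAULT NULL,
--   PRIMARY KEY (`id`),
--   KEY `idx_customer` (`customer_id`)
-- ) ENGINE=InnoDB DEFAULT CHARSET=utf8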
I am trying to match two MySQL queries (for now, the target is CREATE VIEW) to analyze whether executing them would have the same effect on the database.
The queries come from different sources, so their syntax is not consistent.
To further simplify the question, let me add more details:
Let's say there is an already existing view in the database.
This view was created using a CREATE VIEW ... SQL statement.
There is a possibility that the CREATE VIEW ... statement gets updated; currently, to reflect the changes in the database, this statement is executed at the time of migration.
But I want to avoid this: if the CREATE VIEW ... statement would result in the same structure as the existing view in the database, I want to avoid executing it.
To generate the CREATE VIEW from the database I am using SHOW CREATE VIEW ... (and comparing its output with the query originally used to create the view).
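For comparison purposes, the server-side definition can also be read from information_schema, though MySQL stores a normalized form (backquoted, schema-qualified names, * expanded), so a plain textual diff against the original statement will rarely match exactly. Schema and view names below are placeholders:

SELECT VIEW_DEFINITION
FROM information_schema.VIEWS
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME = 'my_view';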
The primary restriction is that I need to make this decision only at the time of migration and cannot rely on prior conclusions (say, from git diff or commit history...).
I have already done some searching for a solution to this:
Found no direct solution for this problem (like a SQL engine to which I can feed both queries and know whether the results would be the same).
Decided to parse the queries, and to achieve that, ended up looking into ANTLR (also used by MySQL Workbench).
ANTLR's approach looks promising, but it will require extensive rule-based parsing and building a query-matching program from scratch.
I realized that just parsing the queries is not enough; I have to create my own POJOs to hold the tokens the lexer produces and then compare the queries based on some rules.
Even predefined POJOs, if I could find them, would allow me to quickly create a solution for this problem.
I managed to connect Drill to PostgreSQL, but even for a simple command like show tables I am receiving:
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: Multiple entries with same key: campaign_items=JdbcTable {campaign_items} and campaign_items=JdbcTable {campaign_items}
I have two schemas, public and fdw, which both contain a table named campaign_items. How can I force Drill to use the fully qualified name to avoid the confusion? Any other suggestions?
To use show tables, you need to select the schema first:
First issue the USE command to identify the schema for which you want to view tables or views. For example, the following USE statement tells Drill that you only want information from the dfs.myviews schema:
USE dfs.myviews;
https://drill.apache.org/docs/show-tables/
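Alternatively, you can qualify down to the schema or use the fully qualified table name. A sketch, assuming the PostgreSQL storage plugin is registered in Drill as postgres:

USE postgres.public;
SHOW TABLES;

/* or address one of the two campaign_items tables directly */
SELECT * FROM postgres.fdw.campaign_items LIMIT 10;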
I want to search for http://example.com and replace with https://example.com.
I know I can target a specific table and column with this approach:
UPDATE table_name SET column_name = REPLACE(column_name, 'http://example.com', 'https://example.com');
But how do I run a query which targets all tables/columns: the entire database?
Do a DB dump and open it as a text file. Find and replace. Save and then re-import.
As far as I know, I don't think you can use REPLACE on all tables in one query.
There are two ways to do it. The first is to generate the SQL UPDATE statements via information_schema and execute them as prepared statements; this is a lot of work.
You must check each column to see whether a replace even makes sense, so you must skip INTs, ENUMs, etc.
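A sketch of that first approach, generating one UPDATE per text-like column (the database name and the type list are placeholders; review the generated statements before running them):

SELECT CONCAT(
         'UPDATE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '`',
         ' SET `', COLUMN_NAME, '` = REPLACE(`', COLUMN_NAME,
         '`, ''http://example.com'', ''https://example.com'');'
       ) AS stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND DATA_TYPE IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');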
The second way is not a real SQL change, but it works: generate a full SQL dump of your database and make the changes in that file with an editor or on the command line with awk or sed. After this you can import the changed file.
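A sketch of that second approach (user, database, and file names are placeholders; keep an untouched copy of the dump as a backup):

# dump, rewrite the URL, then re-import; -i edits in place on GNU sed
mysqldump -u user -p mydb > dump.sql
sed -i 's|http://example\.com|https://example.com|g' dump.sql
mysql -u user -p mydb < dump.sql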