Custom Delimiter in SQL Server Management Studio does not work - sql-server-2008

I'm trying to change my delimiter to a pipe. I have restarted SQL Server Management Studio, but the CSV file I save still uses the previous delimiter.
Is it necessary to do something else?

You are setting the custom delimiter for the Results to Text option, but you are saving results from Results to Grid.
If you switch to Results to Text (shortcut Ctrl + T), you will get the result with your delimiter, and you can copy and save it accordingly.
update:
To turn off rowcount
SET NOCOUNT ON -- Add this setting before the SELECT query
SELECT ...
Or via
Tools > Options > Query Execution > SQL Server > Advanced > check SET NOCOUNT
To turn off completion time
Tools > Options > Query Execution > SQL Server > Advanced > uncheck the Show Completion Time checkbox

Related

Get last cube processed date in SSIS

I need to get the last processed date of an SSAS cube in SSIS and save it into a variable.
I've tried an "Execute SQL Task":
SELECT LAST_DATA_UPDATE as LAST_DT FROM $system.mdschema_cubes
WHERE CUBE_NAME = 'CubeName'
It works fine in the SQL Server Management Studio MDX query window, but in SSIS it fails with: Unsupported data type on result set binding.
Then I've tried:
WITH MEMBER [Measures].[LastProcessed] AS ASSP.GetCubeLastProcessedDate() SELECT [Measures].[LastProcessed] ON 0 FROM [CubeName]
And it says '[ASSP].[GetCubeLastProcessedDate]' function does not exist.
Any ideas how to do this?
Thank you
A linked server might be your best option.
Create the linked server with the following, changing the values as appropriate:
EXEC master.dbo.sp_addlinkedserver
    @server = N'LINKED_SERVER_OLAP_TEST', --Change to a suitable name
    @srvproduct = '', --Leaves the product name blank
    @provider = N'MSOLAP', --Analysis Services
    @datasrc = N'localhost', --Change to your data source
    @catalog = N'TESTCUBE' --Change to set the default cube
Change the data source of your Execute SQL Task so that it points to one of the databases on the server where the linked server is defined, i.e. don't use an Analysis Services data source, use a standard OLE DB connection. Then put the following in your Execute SQL Task (changing as appropriate).
SELECT *
FROM OpenQuery(LINKED_SERVER_OLAP_TEST,'SELECT LAST_DATA_UPDATE as LAST_DT FROM $system.mdschema_cubes
WHERE CUBE_NAME = ''CUBENAME''')
Set the variable to be DATETIME and the result set to be single row.
There may well be other ways to do this; however, I have always found this method the most straightforward.

Data Flow Task - Using a parameter in a SQL Command in OLE DB Source

I have the following SQL Command in an OLE DB Source, in a data flow task:
Select Top 5000 *
From ProcessHistory.ScribeDeadZone1
Where ScribeDeadZoneId > ?
At the moment this works fine. However, I'd like to replace the 5000 with a variable. I can't seem to get the syntax right because everything I try results in an error when I click the Parameters button to set the parameter. I've tried
Select Top ? * and Select Top (?) *. Is this possible to do?
You need to build the SQL statement in a variable and then use the SQL Command from Variable option when defining the data source.
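For example, you can set the variable's EvaluateAsExpression property to True and give it an expression along these lines (just a sketch; the variable names User::TopN and User::LastId are assumptions, not from the question):
"SELECT TOP " + (DT_WSTR, 12) @[User::TopN]
    + " * FROM ProcessHistory.ScribeDeadZone1 WHERE ScribeDeadZoneId > "
    + (DT_WSTR, 12) @[User::LastId]
The OLE DB Source then reads the finished statement from the variable via SQL command from variable, so no ? placeholders are needed.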

How do you execute a query in Sequel Pro?

I want to execute a query in a MySQL DB using Sequel Pro, but I do not see a Run button.
How do I execute my query?
Use ⌘+R to execute the selected Query.
Alternatively, use the dropdown that appears at the bottom right of the query editor and select Run Current or Run Previous depending on where your text cursor is.
Based on Keyboard Shortcuts:
Run all queries ⌥ ⌘ R
Run current query or selection ⌅ or ⌘ R
Use the drop-down button on the right side, underneath the text area.
It should have the following options:
Run Current Query
Run All Queries
I came here looking for ⌘ + Return to execute a query (like MySQL Workbench).
I found that I can map it using macOS key bindings, targeting the menu item names Run Current Query and Run Previous Query.
If you want to do it via the Terminal, it would be (untested):
defaults write com.sequelpro.SequelPro NSUserKeyEquivalents '{
"Run Current Query" = "#\\U21a9";
"Run Previous Query" = "#\\U21a9";
}'
Note: you may have to restart the app.

Pasting SQL into the MySQL command line

I have an application that is defining some SQL code:
mySql = "SELECT
sq.question,
qs.title,
sq.id as question_id,
sq.type,
qs.id as option_id,
sri.title as rankTitle,
sri.id as rankId,
sfi.title as formTitle,
sfi.id as formId,
sq.sub_type,
sq.sort_order
FROM survey_questions as sq
LEFT JOIN question_suboptions as qs
ON sq.id = qs.question_id
LEFT JOIN survey_rankingitems as sri
ON sq.id = sri.question_id
LEFT JOIN survey_formitems as sfi
ON sq.id = sfi.question_id
WHERE sq.survey_id = #{@surveyId}
ORDER BY sq.sort_order"
I would like to paste this code (everything between the double quotes) into the MySQL command line, change the one parameter, and execute it, but I have run into an issue: for every line above, MySQL displays:
Display all 1450 possibilities? (y or n)
And then 1450 different available commands. If I remove all line breaks and tabs then I can paste it in, but that is time-consuming and a pain. Is there a way that I can simply paste in the above code, edit it, and then execute it as a single unit?
This is the default behavior of the mysql command-line client whenever the Tab key is pressed; mysql relies on the underlying readline or EditLine libraries (not on Windows).
By default, when the user switches to a database, mysql reads the table and field definitions. Pressing the Tab key then makes mysql conveniently offer completion of the current input using the known tables and fields.
However, pasting text into mysql that contains TAB characters (\t or 0x09) triggers the same behavior, even though no Tab key was actually pressed. This can be annoying.
Two options given to mysql can prevent that behavior, though. My favorite is --disable-auto-rehash. The other one is --quick or -q.
--disable-auto-rehash prevents database, table, and column name completion (the definitions are not read from the database; use the rehash command if you need completion later on). Command history is kept, though (retrieved via the ↑ and ↓ keys, for instance), which is convenient.
--quick or -q makes mysql not use the history file and disables completion (it does not read the database definitions).
On Linux, one may add an alias in .bashrc to use --disable-auto-rehash automatically:
alias mysql2='mysql --disable-auto-rehash'
Perhaps you could save the statement to a text file myTest.sql, then use the MySQL command source myTest.sql to run it? You could then tweak the SQL in the file, save the changes, and run it again.
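For example, from inside the mysql client (the file name myTest.sql is just a placeholder):
mysql> source myTest.sql
Or, from the shell, without opening the client interactively:
mysql -u username -p mydatabase < myTest.sql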
You need to remove the line breaks and tabs. The double tab is what causes the Display all 1450 possibilities? (y or n) prompt, and the line breaks cause it to execute early.
If it's PHP, write a little script to strip it for you:
echo (preg_replace("/\s+/", " ", $string));
Or something similar for other languages.
Breaking not so bad's answer explained the cause of this problem really well.
From the question:
If I remove all linebreaks and tabs then I can paste in, but that is time consuming and a pain.
In my case, I just replaced the tabs with spaces and I was able to paste the query just fine. The MySQL console doesn't seem to care about the newlines, just the tabs.
As a way to prevent this, most editors have a setting that will insert spaces instead of tabs when you press the Tab key. I normally have my IDEs configured this way, but in this instance it was a query I'd copied from MySQL Workbench. Conveniently, it also has a setting to use spaces instead of tabs:
Edit > Preferences > General Editors > check Tab key inserts spaces instead of tabs > OK

import database dump to mysql using visual foxpro

I used Leafe's stru2mysql.prg and vfp2mysql_upload.prg to create a .sql dump file from DBFs. I connect to the MySQL database from VFP using ODBC. I know how to upload the SQL dump file, but I need to automate the whole process, i.e. after creating the dump file, my Visual FoxPro program should upload it without a third party (automatically). I thought of using the source command, but that needs to be run at the mysql prompt. The assumption here is that my end users don't know how to import (which most of them don't). Please advise on how I can automate importing the SQL file into the MySQL database. Thank you.
I think what you are looking for are the various SQL* functions in Foxpro. See the VFP help or MSDN on SQLCONNECT (or SQLSTRINGCONNECT), SQLEXEC, and SQLDISCONNECT functions to get you started. Microsoft provided good examples on each in the documentation.
You may also want to use FILETOSTR to get the output from Leafe's programs into a string for the SQLEXEC function.
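A minimal sketch of that idea (untested; the DSN name, credentials, and file path are placeholders, and it assumes the dump holds plain statements separated by semicolons, so semicolons embedded in data values would need smarter splitting):
lnHandle = SQLCONNECT("mysql_dsn", "username", "password")
IF lnHandle > 0
    lcScript = FILETOSTR("c:\dumps\mydata.sql")  && whole dump file as one string
    * MySQL ODBC normally runs one statement per SQLEXEC() call,
    * so walk the script statement by statement.
    DO WHILE ";" $ lcScript
        lcStmt   = LEFT(lcScript, AT(";", lcScript) - 1)
        lcScript = SUBSTR(lcScript, AT(";", lcScript) + 1)
        IF NOT EMPTY(CHRTRAN(lcStmt, CHR(13) + CHR(10) + CHR(9) + " ", ""))
            IF SQLEXEC(lnHandle, lcStmt) < 0
                =MESSAGEBOX("Statement failed: " + LEFT(lcStmt, 100), "System Message")
            ENDIF
        ENDIF
    ENDDO
    =SQLDISCONNECT(lnHandle)
ELSE
    =MESSAGEBOX("Unable To Connect To Your Database", "System Message")
ENDIF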
Here are the steps I use to take data from a Visual FoxPro database and upload it to a MySQL database. These are all put into a custom method on a form, which is fired by a command button. For example, the method would be 'uploadnewdata' and I pass parameters for whichever data tables I need.
1) Connect to the server - I use MySQL ODBC
2) Validate the user (this uses SQLEXEC to pull the matching record from the users table):
IF M.WorkingDatabase<>-1
    nRetVal=SQLEXEC(m.WorkingDatabase,"SELECT * FROM users", "csrUsersOnServer")
    SELECT csrUsersOnServer
    SELECT userid FROM csrUsersOnServer;
        WHERE ALLTRIM(UPPER(userid))=ALLTRIM(UPPER(lcRanchUser));
        AND ALLTRIM(UPPER(lcPassWord))=ALLTRIM(UPPER(lchPassWord));
        INTO CURSOR ValidUsers
    IF _TALLY>=1
        * Valid user - continue
    ELSE
        =MESSAGEBOX("Your Premise ID Does Not Match Any Records On The Server","System Message")
        RETURN 0
    ENDIF
ELSE
    =MESSAGEBOX("Unable To Connect To Your Database", "System Message")
    RETURN 0
ENDIF
3) Once that is successful I create my base cursor (this is the one I'm sending from)
4) I then loop through that cursor, creating variables for the values in the fields
5) Then, using SQLEXEC and INSERT INTO, I insert each record (see the sketch after this list)
6) Once the program has finished processing the cursor, it generates a message box with a 'finished' message and control returns to the form.
All the user has to do is select the starting table and enter their login information.
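A rough sketch of steps 4 and 5 (not the author's actual code; the cursor csrBaseData, its columns, and the target table targettable are placeholders):
SELECT csrBaseData
SCAN
    * Step 4: create variables from the current row's fields
    lcName  = ALLTRIM(csrBaseData.name)
    lnValue = csrBaseData.value
    * Step 5: insert the record on the server with a parameterised pass-through query
    IF SQLEXEC(m.WorkingDatabase, ;
        "INSERT INTO targettable (name, value) VALUES (?lcName, ?lnValue)") < 0
        =MESSAGEBOX("Insert failed", "System Message")
    ENDIF
ENDSCAN
=MESSAGEBOX("Finished", "System Message")  && step 6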