Weird problems with mysql outfile under FreeBSD

(See my answer below. Leaving this up in case it helps someone else.)
What follows is a series of attempts to dump a query to an outfile on a new FreeBSD box that my site has moved to. The results are the same whether I log in as myself or as root. I hope the style isn't too annoying: my comments are commented out around the actual code and output.
// try to dump query to my home dir
SELECT pmr.datetime_requested,
nfo.postal_code
FROM
print_mailing_request pmr,
personal_info nfo
WHERE
nfo.person = pmr.person AND
pmr.datetime_requested >= "2010-01-01 00:00:00" AND
(pmr.print_mailing = 31 OR pmr.print_mailing = 30)
ORDER BY pmr.datetime_requested INTO OUTFILE '/usr/home/david/x';
ERROR 1 (HY000): Can't create/write to file '/usr/home/david/x' (Errcode: 2)
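// (Side note: Errcode 2 is the OS error ENOENT, "No such file or directory".
// The perror utility that ships with MySQL decodes these codes:
// $ perror 2
// OS error code 2: No such file or directory)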
// tried creating file first with touch and even chmod 077 file
// but same error each time
// OK, let's try /tmp
SELECT pmr.datetime_requested,
nfo.postal_code
FROM
print_mailing_request pmr,
personal_info nfo
WHERE
nfo.person = pmr.person AND
pmr.datetime_requested >= "2010-01-01 00:00:00" AND
(pmr.print_mailing = 31 OR pmr.print_mailing = 30)
ORDER BY pmr.datetime_requested INTO OUTFILE '/tmp/x';
Query OK, 24654 rows affected (0.78 sec)
// so let's look at the file
less /tmp/x
/tmp/x: No such file or directory
// Log back into mysql and try same query again
ERROR 1086 (HY000): File '/tmp/x' already exists
ls /tmp
20100325180233.gtg2010.csv 20100330094652.gtg2010.csv
20100325180448.gtg2010.csv 2010_Q1_UNO.csv
20100325181446.gtg2010.csv 4724.csv
20100325181927.gtg2010.csv aprbUfvxp
20100326003002.gtg2010.csv dave.txt
20100327003002.gtg2010.csv etr.xml
20100328003002.gtg2010.csv mysql.sock
20100329003003.gtg2010.csv
// No file x.
// If I run the query with no INTO OUTFILE I see 24000+ rows like
| 2010-04-04 13:27:09 | 33156 |
| 2010-04-04 13:27:10 | 33156 |
| 2010-04-04 13:30:04 | NE38 8SR |
| 2010-04-04 14:27:03 | 00901 |
| 2010-04-04 14:37:04 | 75001 |
| 2010-04-04 14:53:05 | 78640 |
| 2010-04-04 15:15:03 | 07410 |
| 2010-04-04 15:27:04 | 43235 |
// So I know it isn't the query...
// Advice?

Doh! When I log into mysql on this machine my connection string has an IP address in it. As far as mysql is concerned, /tmp is not on the machine I am logged into: INTO OUTFILE writes the file on the server host, so my file landed in the server's /tmp (which is also why the second attempt failed with "already exists").
So I solved the problem by using mysql -e, which makes the client write the results locally, e.g.:
mysql -h my.db.com -u usrname --password=pass db_name -e 'SELECT foo FROM bar' > /tmp/myfile.txt
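The -e output is tab-separated; if you want CSV instead, one hedged sketch (assuming the data itself contains no tabs or commas) is to translate the tabs on the way out:
mysql -h my.db.com -u usrname --password=pass db_name -e 'SELECT foo FROM bar' | tr '\t' ',' > /tmp/myfile.csv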

what does the @ symbol do in this [@][] ansible code?

I would like to know what the @ symbol and [@][] do in this line of code. This is being used in Ansible. Thank you.
json_query("response.result.job | [@][]")
The whole code:
- name: task1
<removed for simplicity>
cmd: 'show jobs all'
register: all_jobs
until: |
all_jobs is not failed
and (all_jobs.stdout | from_json | json_query("response.result.job|[@][]") | default([], true) | length > 0)
and (all_jobs.stdout | from_json | json_query("response.result.job|[@][]")
| json_query("[?status != 'FIN']") | length == 0)
retries: 60
delay: 30
For those who are curious, I think I found the answer. It is all about JMESPath: @ is the current node, [] flattens a list, [][] flattens nested lists, and [@][] flattens the second-level list. To understand this, see the link below; there are examples there.
https://jmespath.org/tutorial.html
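To see the behaviour concretely, here is a minimal sketch using the jmespath Python library (the same library that backs Ansible's json_query filter), with made-up data:
import jmespath

# [] flattens one level of nesting
print(jmespath.search('[]', [[1, 2], [3, 4]]))           # [1, 2, 3, 4]

# [@][] wraps the current node in a list, then flattens it one level,
# so a single job object and a list of job objects both come out flat
print(jmespath.search('[@][]', {'id': 7}))               # [{'id': 7}]
print(jmespath.search('[@][]', [{'id': 7}, {'id': 8}]))  # [{'id': 7}, {'id': 8}]
This is why the until condition above works whether the device reports one job or several.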

Loading quoted numbers into a Snowflake table from CSV with COPY INTO <TABLE>

I have a problem with loading CSV data into a Snowflake table. Fields are wrapped in double quote marks, and hence there is a problem with importing them into the table.
I know that COPY INTO has the CSV-specific option FIELD_OPTIONALLY_ENCLOSED_BY = '"' but it's not working at all.
Here are some pieces of the table definition and the copy command:
CREATE TABLE ...
(
GamePlayId NUMBER NOT NULL,
etc...
....);
COPY INTO ...
FROM ...csv.gz'
FILE_FORMAT = (TYPE = CSV
STRIP_NULL_VALUES = TRUE
FIELD_DELIMITER = ','
SKIP_HEADER = 1
error_on_column_count_mismatch=false
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
)
ON_ERROR = "ABORT_STATEMENT"
;
Csv file looks like this:
"3922000","14733370","57256","2","3","2","2","2019-05-23 14:14:44",",00000000",",00000000",",00000000",",00000000","1000,00000000","1000,00000000","1317,50400000","1166,50000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000"
I get an error
'''Numeric value '"3922000"' is not recognized '''
I'm pretty sure it's because the NUMBER value is interpreted as a string when Snowflake reads the "" marks, but since I use
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
it shouldn't even be there... Does anyone have a solution to this?
Maybe something is incorrect with your file? I was just able to run the following without issue.
1. create the test table:
CREATE OR REPLACE TABLE
dbNameHere.schemaNameHere.stacko_58322339 (
num1 NUMBER,
num2 NUMBER,
num3 NUMBER);
2. create test file, contents as follows
1,2,3
"3922000","14733370","57256"
3,"2",1
4,5,"6"
3. create stage and put file in stage
4. run the following copy command
COPY INTO dbNameHere.schemaNameHere.STACKO_58322339
FROM @stageNameHere/stacko_58322339.csv.gz
FILE_FORMAT = (TYPE = CSV
STRIP_NULL_VALUES = TRUE
FIELD_DELIMITER = ','
SKIP_HEADER = 0
ERROR_ON_COLUMN_COUNT_MISMATCH=FALSE
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
)
ON_ERROR = "CONTINUE";
5. results
+-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
| file | status | rows_parsed | rows_loaded | error_limit | errors_seen | first_error | first_error_line | first_error_character | first_error_column_name |
|-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------|
| stageNameHere/stacko_58322339.csv.gz | LOADED | 4 | 4 | 4 | 0 | NULL | NULL | NULL | NULL |
+-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
1 Row(s) produced. Time Elapsed: 2.436s
6. view the records
>SELECT * FROM dbNameHere.schemaNameHere.stacko_58322339;
+---------+----------+-------+
| NUM1 | NUM2 | NUM3 |
|---------+----------+-------|
| 1 | 2 | 3 |
| 3922000 | 14733370 | 57256 |
| 3 | 2 | 1 |
| 4 | 5 | 6 |
+---------+----------+-------+
Can you try a similar test to this?
EDIT: A quick look at your data shows many of your numeric fields appear to start with commas, so something is definitely amiss with the data.
Assuming your numbers are European-formatted, with , as the decimal separator and . as the thousands separator, then reading the numeric formatting help it seems Snowflake does not support this as input. I'd open a feature request.
But if you read the column in as text then use REPLACE like
SELECT '100,1234'::text as A
,REPLACE(A,',','.') as B
,TRY_TO_DECIMAL(b, 20,10 ) as C;
gives:
A B C
100,1234 100.1234 100.1234000000
Safer would be to strip the thousands separators first, like:
SELECT '1.100,1234'::text as A
,REPLACE(A,'.') as B
,REPLACE(B,',','.') as C
,TRY_TO_DECIMAL(C, 20,10 ) as D;
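(By the same logic as the first example, that should give D = 1100.1234000000.)
Putting it together, a minimal sketch of the whole load, with hypothetical table and stage names, is to land the European-formatted columns as text and convert afterwards:
-- hypothetical names throughout; the idea is just "land as text, convert after"
CREATE OR REPLACE TABLE gameplay_staging (
    GamePlayId NUMBER,
    Amount_raw VARCHAR   -- holds values like '1000,00000000'
);
COPY INTO gameplay_staging
FROM @myStage/data.csv.gz
FILE_FORMAT = (TYPE = CSV
               FIELD_DELIMITER = ','
               SKIP_HEADER = 1
               FIELD_OPTIONALLY_ENCLOSED_BY = '"');
-- convert the decimal commas while moving into the real table
INSERT INTO gameplay (GamePlayId, Amount)
SELECT GamePlayId,
       TRY_TO_DECIMAL(REPLACE(Amount_raw, ',', '.'), 20, 10)
FROM gameplay_staging;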

Periodic "Opening tables" on MySQL Insert

I have a long-running insert whose status toggles between NULL and "Opening tables":
| 5 | mckelvey | mushroom.jpl.nasa.gov:57050 | smap_ampcs_v5_2_0 | Query | 7105 | Opening tables | INSERT INTO ChannelValue
| 5 | mckelvey | mushroom.jpl.nasa.gov:57050 | smap_ampcs_v5_2_0 | Query | 7114 | NULL | INSERT INTO ChannelValue
It does this continually. SHOW OPEN TABLES shows 10 opened tables, 5 in use.
The global status Opened_tables is 190 and does not increase; table_open_cache is 1024.
Is something wrong? Why does it constantly open tables for an insert already in progress?
The insert is below. The tables in the v4 database are MyISAM, and the v5_2 tables are InnoDB. ComboParent is temporary and MyISAM. There are no errors in the server log.
INSERT INTO ChannelValue
SELECT
cv.sessionId,
cv.hostId,
cv.sessionFragment,
1 AS id,
cv.id AS channelDataId,
cp.parentId,
NULL AS uniqueId,
IF (cv.dnUnsignedValue IS NOT NULL,
CONVERT(cv.dnUnsignedValue, SIGNED),
cv.dnIntegerValue) AS dnPackedValue,
cv.dnDoubleValue,
sv.stringId,
cv.eu,
SET_FLAGS(cv.dnDoubleFlag,
cv.euFlag,
cv.dnAlarmState,
cv.euAlarmState,
cv.dnUnsignedValue) AS flags
FROM smap_ampcs_v4_0_0.ChannelValue AS cv
STRAIGHT_JOIN ComboParent AS cp
ON ((cv.sessionId = cp.sessionId) AND
(cv.hostId = cp.hostId) AND
(cv.sessionFragment = cp.sessionFragment) AND
(cv.sclkCoarse = cp.sclkCoarse) AND
(cv.sclkFine = cp.sclkFine) AND
(cv.ertCoarse = cp.ertCoarse) AND
(cv.ertFine = cp.ertFine) AND
(cv.scetCoarse = cp.scetCoarse) AND
(cv.scetFine = cp.scetFine) AND
(cv.dssId = cp.dssId) AND
(cv.vcid <=> cp.vcid) AND
(cv.isRealtime = cp.isRealtime))
LEFT JOIN StringValue AS sv
ON (
(cv.hostId = sv.hostId) AND
(cv.sessionId = sv.sessionId) AND
(cv.sessionFragment = sv.sessionFragment) AND
(0 = sv.fromSse) AND
(cv.dnStringValue = sv.stringValue)
)
WHERE (cv.packetId IS NULL)
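For anyone chasing the same symptom, these are the standard commands behind the numbers quoted above (a sketch; run them while the INSERT is executing):
SHOW GLOBAL STATUS LIKE 'Opened_tables';        -- should stay flat if the cache is not churning
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW OPEN TABLES FROM smap_ampcs_v5_2_0;        -- which tables are open and in use
SHOW FULL PROCESSLIST;                          -- watch the State column toggle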

Save an excel/csv file from mysql using PHP [run via a cron job]

I have some results from a mysql table that I would like to export. I am currently able to click a download link and download an .xls file, but I would like to run this via a cron job and have the weekly results emailed to me.
I have looked at doing this from MySQL directly, saving it out as a CSV.
However, I am struggling with the SQL. The table format is as follows:
btFormQuestions (some columns omitted)
+-------+---------------+----------+-----------+
| msqID | questionSetId | Question | InputType |
|-------+---------------+----------+-----------+
| 1 | 123456 | Name | field |
| 2 | 123456 | Telephone| field |
| 3 | 123456 | Email | email |
| 4 | 123456 | Enquiry | test |
btFormAnswers
+-----+------+-------+-----------------+
| aID | asID | msqID | answer |
+-----+------+-------+-----------------+
| 1 | 1 | 1 | Sean |
| 2 | 1 | 2 | 0800 0 |
| 3 | 1 | 3 | se@te.com |
| 4 | 1 | 4 | Asking Question |
btFormAnswersSet
+------+---------------+---------------------+
| asID | questionSetId | created |
+------+---------------+---------------------+
| 1 | 123456 | 2013-04-30 11:07:55 |
The sql queries, I am currently using to get the information into PHP and into an array is as follows:
//get answer sets
$sql='SELECT * FROM btFormAnswerSet AS aSet '.
'WHERE aSet.questionSetId='.$questionSet.' ORDER BY created DESC LIMIT 0, 100';
$answerSetsRS=$db->query($sql);
//load answers into a nicer multi-dimensional array
$answerSets=array();
$answerSetIds=array(0);
while( $answer = $answerSetsRS->fetchRow() ){
//answer set id - question id
$answerSets[$answer['asID']]=$answer;
$answerSetIds[]=$answer['asID'];
}
//get answers
$sql='SELECT * FROM btFormAnswers AS a WHERE a.asID IN ('.join(',',$answerSetIds).')';
$answersRS=$db->query($sql);
//load answers into a nicer multi-dimensional array
while( $answer = $answersRS->fetchRow() ){
//answer set id - question id
$answerSets[$answer['asID']]['answers'][$answer['msqID']]=$answer;
}
return $answerSets;
I would like to be able to do one of the following
A.) Move all of this into one query to be able to get the following sort of result
+---------------+------+-----------+-----------+-----------------+
| QuestionSetID | Name | Telephone | Email | Enquiry |
+---------------+------+-----------+-----------+-----------------+
| 123456 | Sean | 0800 0 | se#te.com | Asking Question |
(I did try this with various joins but could not get them quite right)
If I could get this to work I would not mind saving it as a CSV (a sketch of such a query follows below).
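For option A, a hedged sketch of the one-query approach, using the table and column names from the question (MAX(CASE ...) is the standard MySQL pivot idiom; the answer columns are hard-coded per msqID):
SELECT s.questionSetId,
       MAX(CASE WHEN a.msqID = 1 THEN a.answer END) AS Name,
       MAX(CASE WHEN a.msqID = 2 THEN a.answer END) AS Telephone,
       MAX(CASE WHEN a.msqID = 3 THEN a.answer END) AS Email,
       MAX(CASE WHEN a.msqID = 4 THEN a.answer END) AS Enquiry
FROM btFormAnswersSet AS s
JOIN btFormAnswers AS a ON a.asID = s.asID
WHERE s.questionSetId = 123456
GROUP BY s.asID, s.questionSetId;
With SELECT ... INTO OUTFILE (or the mysql -e trick from the first question above), that result can be dumped straight to CSV.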
B.) Output the returned array as an Excel file that can be saved to a location on the server.
The current code creates an HTML table from the array.
The code is a little long, so I am only pasting the top and bottom bits here:
//fwrite($handle, $excelHead);
//fwrite($handle, $row);
//fflush($handle);
ob_start();
header("Content-Type: application/vnd.ms-excel");
echo "<table>\r\n";
//Question headers go here
foreach($answerSets as $answerSetId=>$answerSet){
$questionNumber=0;
$numQuestionsToShow=2;
echo "\t<tr>\r\n";
echo "\t\t<td>". $dateHelper->getSystemDateTime($answerSet['created'])."</td>\r\n";
foreach($questions as $questionId=>$question){
$questionNumber++;
if ($question['inputType'] == 'checkboxlist'){
$options = explode('%%', $question['options']);
$subanswers = explode(',', $answerSet['answers'][$questionId]['answer']);
for ($i = 1; $i <= count($options); $i++)
{
echo "\t\t<td align='center'>\r\n";
if (in_array(trim($options[$i-1]), $subanswers)) {
// echo "\t\t\t".$options[$i-1]."\r\n";
echo "x";
} else {
echo "\t\t\t \r\n";
}
echo "\t\t</td>\r\n";
//fwrite($handle, $node);
//fflush($handle);
}
}elseif($question['inputType']=='fileupload'){
echo "\t\t<td>\r\n";
$fID=intval($answerSet['answers'][$questionId]['answer']);
$file=File::getByID($fID);
if($fID && $file){
$fileVersion=$file->getApprovedVersion();
echo "\t\t\t".''.$fileVersion->getFileName().''."\r\n";
}else{
echo "\t\t\t".t('File not found')."\r\n";
}
echo "\t\t</td>\r\n";
}else{
echo "\t\t<td>\r\n";
echo "\t\t\t".$answerSet['answers'][$questionId]['answer'].$answerSet['answers'][$questionId]['answerLong']."\r\n";
echo "\t\t</td>\r\n";
}
//fwrite($handle, $node);
//fflush($handle);
}
echo "\t</tr>\r\n";
//fwrite($handle, $row);
//fflush($handle);
}
echo "</table>\r\n";
//fwrite($handle, $excelFoot);
//fflush($handle);
//fclose($handle);
file_put_contents($filePath, ob_get_clean());
I can get the file to save to the directory, but I am having issues saving it as an Excel file. I have also tried playing with fwrite (instead of the output buffer) with similar results.
Can anyone help, or point me in the right direction?
Thank you,
Sean
I would do this from within concrete5. That way you get all the form-results-related models, plus the various helpers (like email).
For more info about jobs, see http://www.concrete5.org/documentation/developers/system/jobs/ . To run from a cron job, see http://www.concrete5.org/documentation/how-tos/developers/how-to-run-certain-jobs-via-cron/ .
It looks like you've got the code to generate the answers, and put it into an array, but you might want to look at something like https://github.com/concrete5/concrete5/blob/master/web/concrete/core/controllers/blocks/form_statistics.php#L32 . I'm not positive that's exactly what you need, but I do know that the dashboard page builds that answers table for you, so the code clearly exists somewhere.
Next, to create an Excel file, elsewhere c5 uses the "put it into a table and call it .xls" method, which works with Excel and OpenOffice. I'm not sure exactly what you mean by "having issues setting it as Excel", but it sounds like this is your issue at the moment. If something is getting saved to the file, then you should post the file contents and you/we can work backwards to what is causing the issue. It's probably just misformatted HTML or something.
Finally, to send the email, you can use the Mail Helper, but that doesn't currently allow for attachments (there's a pull request in github that does, and that you could use to override the mail helper with). Typically, the "best practice" would be to send it as a link.
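As a minimal sketch of the cron-side write (hypothetical path and data; note that header() calls do nothing useful when PHP runs from cron, so only what you write to the file matters):
<?php
// hypothetical standalone version of the "table called .xls" method:
// build the HTML table in a string, then write it out once
$rows = array(
    array('2013-04-30 11:07:55', 'Sean', '0800 0', 'se@te.com', 'Asking Question'),
);
$html = "<table>\r\n";
foreach ($rows as $row) {
    $html .= "\t<tr><td>" . implode('</td><td>', array_map('htmlspecialchars', $row)) . "</td></tr>\r\n";
}
$html .= "</table>\r\n";
file_put_contents('/path/to/weekly-report.xls', $html);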

SQL Server 2008: U and X locks - deadlock on one table without any indexes. How?

I observe really strange behavior of my DB.
I have one small table (about 300 rows) where one field is continuously updated.
I was getting a lot of deadlocks there: an update of the table was deadlocking against a similar update of the same table (U lock vs. X lock).
So I decided to remove the clustered index (so the table doesn't have any indexes now) to fix the deadlocks. But it didn't help, and now I'm getting deadlocks between the U and X lock modes.
So: one table, no indexes, and two sessions updating it.
Victim
update dbo.MyNumber set
@nextno = nextno = nextno + 1
where [type] = @type
and yearid = @yearid
Winning query:
update dbo.MyNumber set
@nextno = nextno = nextno + 1
where [type] = @TYPE
and yrclosedyn = 0
Rows are definitely different but the page is the same.
How is this possible? Maybe it is connected to lock escalation, or...?
I really appreciate any suggestions.
Thanks in advance
Mike
DEADLOCK XML:
<deadlock-list>
<deadlock victim="process6c492e8">
<process-list>
<process id="processb6a988" taskpriority="0" logused="1848" waitresource="RID: 5:1:127478:16" waittime="3478" ownerId="17153439" transactionname="user_transaction" lasttranstarted="2012-12-18T12:31:40.147" XDES="0xffffffff89482258" lockMode="U" schedulerid="7" kpid="4248" status="suspended" spid="98" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2012-12-18T12:31:49.913" lastbatchcompleted="2012-12-18T12:31:49.913" clientapp="PenAIR" hostname="S16047425" hostpid="9300" loginname="sa" isolationlevel="read committed (2)" xactid="17153439" currentdb="5" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
<executionStack>
<frame procname="MYDATABASE.dbo.MyStoredProcedure" line="92" stmtstart="9062" stmtend="9388" sqlhandle="0x030005002d15a05e58b5710016a100000100000000000000">
UPDATE dbo.MyNumber Set
@NEXTNO = NEXTNO = NEXTNO + 1
WHERE (TYPE = @TYPE) AND (YRCLOSEDYN = 0) </frame>
</executionStack>
<inputbuf>
Proc [Database Id = 5 Object Id = 1587549485] </inputbuf>
</process>
<process id="process6c492e8" taskpriority="0" logused="192" waitresource="RID: 5:1:127478:20" waittime="8252" ownerId="17153562" transactionname="user_transaction" lasttranstarted="2012-12-18T12:31:45.140" XDES="0x6583b1e0" lockMode="U" schedulerid="13" kpid="19824" status="suspended" spid="143" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2012-12-18T12:31:45.140" lastbatchcompleted="2012-12-18T12:31:45.140" clientapp="PenAIR" hostname="S16047425" hostpid="4760" loginname="sa" isolationlevel="read committed (2)" xactid="17153562" currentdb="5" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
<executionStack>
<frame procname="MYDATABASE.dbo.MyStoredProcedure" line="92" stmtstart="9062" stmtend="9388" sqlhandle="0x030005002d15a05e58b5710016a100000100000000000000">
UPDATE dbo.MyNumber Set
@NEXTNO = NEXTNO = NEXTNO + 1
WHERE ([TYPE] = @TYPE) AND (YRCLOSEDYN = 0) </frame>
</executionStack>
<inputbuf>
Proc [Database Id = 5 Object Id = 1587549485] </inputbuf>
</process>
</process-list>
<resource-list>
<ridlock fileid="1" pageid="127478" dbid="5" objectname="MYDATABASE.dbo.MyNumber" id="lock464f2640" mode="X" associatedObjectId="72057594131120128">
<owner-list>
<owner id="processb6a988" mode="X"/>
</owner-list>
<waiter-list>
<waiter id="process6c492e8" mode="U" requestType="wait"/>
</waiter-list>
</ridlock>
<ridlock fileid="1" pageid="127478" dbid="5" objectname="MYDATABASE.dbo.MyNumber" id="lockfffffffff1974980" mode="X" associatedObjectId="72057594131120128">
<owner-list>
<owner id="process6c492e8" mode="X"/>
</owner-list>
<waiter-list>
<waiter id="processb6a988" mode="U" requestType="wait"/>
</waiter-list>
</ridlock>
</resource-list>
</deadlock>
</deadlock-list>
Shredding your deadlock graph into tabular form shows the following.
+----------+-------------------------+-----------+-----------+------------+----------+--------------------+--------------------+---------+
| LockMode | LockedObject | TranCount | LockEvent | LockedMode | WaitMode | WaitResource | IsolationLevel | LogUsed |
+----------+-------------------------+-----------+-----------+------------+----------+--------------------+--------------------+---------+
| U | MYDATABASE.dbo.MyNumber | NULL | rid | X | U | RID: 5:1:127478:20 | read committed (2) | 192 |
| U | MYDATABASE.dbo.MyNumber | NULL | rid | X | U | RID: 5:1:127478:16 | read committed (2) | 1848 |
+----------+-------------------------+-----------+-----------+------------+----------+--------------------+--------------------+---------+
You still haven't answered my question in the comments as to whether the sequence generation code is only called once in every transaction.
If it is not, it is easy to generate a deadlock graph similar to the one in your post.
Setup
CREATE TABLE dbo.MyNumber
(
[TYPE] CHAR(1),
YRCLOSEDYN INT,
NEXTNO INT
)
INSERT INTO dbo.MyNumber
VALUES ('X', 0, 1),
('Y', 0, 1)
GO
CREATE PROC MyStoredProcedure @TYPE CHAR(1),
@NEXTNO INT OUTPUT
AS
UPDATE dbo.MyNumber
SET @NEXTNO = NEXTNO = NEXTNO + 1
WHERE ( [TYPE] = @TYPE )
AND ( YRCLOSEDYN = 0 )
Connection 1
BEGIN TRAN
DECLARE @NEXTNO INT
EXEC MyStoredProcedure 'Y', @NEXTNO OUTPUT
WAITFOR DELAY '00:00:05'
EXEC MyStoredProcedure 'X', @NEXTNO OUTPUT
ROLLBACK
Connection 2
(Run immediately after executing the code in connection 1)
BEGIN TRAN
DECLARE @NEXTNO INT
EXEC MyStoredProcedure 'X', @NEXTNO OUTPUT
EXEC MyStoredProcedure 'Y', @NEXTNO OUTPUT
ROLLBACK
The deadlock graph output from that is very similar to the one above:
+----------+-------------------------+-----------+-----------+------------+----------+-----------------+--------------------+---------+
| LockMode | LockedObject | TranCount | LockEvent | LockedMode | WaitMode | WaitResource | IsolationLevel | LogUsed |
+----------+-------------------------+-----------+-----------+------------+----------+-----------------+--------------------+---------+
| U | MYDATABASE.dbo.MyNumber | 2 | rid | X | U | RID: 11:1:144:1 | read committed (2) | 248 |
| U | MYDATABASE.dbo.MyNumber | 2 | rid | X | U | RID: 11:1:144:0 | read committed (2) | 248 |
+----------+-------------------------+-----------+-----------+------------+----------+-----------------+--------------------+---------+
If this is the explanation for your issue, you will need to ensure that you update the sequences in the same order in all transactions. (I assume there must be some good reason why you can't just use an IDENTITY-column-based solution.)
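A sketch of that fix against the repro above (assuming both sequence values really are needed inside one transaction): have every transaction take the rows in the same fixed order, e.g. 'X' before 'Y', so the wait cycle can no longer form.
-- both connections now update in the same order, so one simply
-- queues behind the other instead of deadlocking
BEGIN TRAN
DECLARE @NEXTNO INT
EXEC MyStoredProcedure 'X', @NEXTNO OUTPUT -- always 'X' first
EXEC MyStoredProcedure 'Y', @NEXTNO OUTPUT -- then 'Y'
COMMIT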