Right now my container's timezone is different from MySQL's, and I need to run a query that updates the time field in MySQL using MySQL's own timezone.
Normally I could run the query with "edited=NOW()", but the Golang squirrel library does not seem to have a proper way to set this clause.
I cannot change either my app's or the MySQL container's timezone; I just need to update the date in the DB.
Is there any way to do this properly in squirrel?
Instead of Set(column, "NOW()"), you need to do Set(column, sq.Expr("NOW()")), so that squirrel embeds NOW() as a raw SQL expression instead of binding it as a string parameter.
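For example, here is a minimal sketch, assuming the common sq import alias for github.com/Masterminds/squirrel and the table/column names used in the answers below (the DSN and the id value are hypothetical):

package main

import (
    "database/sql"
    "log"

    sq "github.com/Masterminds/squirrel"
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/mydb") // hypothetical DSN
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // sq.Expr keeps NOW() as raw SQL; a plain string would be bound
    // as the literal value "NOW()" instead of calling the function.
    _, err = sq.Update("tablename").
        Set("dt_edited", sq.Expr("NOW()")).
        Where(sq.Eq{"id": 1}).
        RunWith(db).
        Exec()
    if err != nil {
        log.Fatal(err)
    }
}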
I assume you're using this squirrel package; next time give a link and ideally some example code. So I assume what you really want is SQL like this:
update tablename set dt_edited=NOW() where id=1;
You could just build that with fmt.Sprintf safely (the id is an integer, so there is no injection risk), and execute it with the db driver directly:
sql := fmt.Sprintf("update tablename set dt_edited=NOW() where id=%d", tableA.ID)
Or, using a placeholder, you probably want something like:
db.Exec("update tablename set dt_edited=NOW() where id=?", tableA.ID)
I'm not sure of the exact syntax, but something like that should work fine; you just need a query object to send SQL to. Don't attempt to send dt_edited=NOW() as a parameter, as that will get escaped; the id you can safely pass in that way.
You’re passing it as a string. Try:
Set("dt_edited=NOW()")
I've created a MySQL sproc which returns 3 separate result sets. I'm using the npm mysql package downstream to execute the sproc and get a result structured in JSON with the 3 result sets. I need the ability to filter the JSON result sets that are returned based on some type of indicator in each result set. For example, if I wanted to get the result set from the JSON response which deals specifically with Suppliers, then I could use some type of JS filter similar to this:
var supplierResultSet = mySqlJsonResults.filter(x => x.ResultType === 'SupplierResults');
I think SQL Server provides the ability to include a hard-coded column value in a SQL result set like this:
select
'SupplierResults',
*
from
supplier
However, this approach appears to be invalid in MySQL, because MySQL Workbench is telling me that the sproc syntax is invalid and won't let me save the changes. Do you know if something like what I'm trying to achieve is possible in MySQL? If not, can you recommend alternative approaches that would help me achieve my ultimate goal of including some type of fixed indicator in each result set, to provide a handle for downstream filtering of the JSON response?
If I followed you correctly, you just need to prefix * with the table name or alias:
select 'SupplierResults' hardcoded, s.* from supplier s
As far as I know, this is the SQL standard: select * is valid only when no other expression is added in the select clause. SQL Server is lax about this, but most other databases follow the standard.
It is also a good idea to assign a name to the column that contains the hardcoded value (I named it hardcoded in the above query).
In MySQL you can simply put the * first:
SELECT *, 'SupplierResults'
FROM supplier
Demo on dbfiddle
To be more specific, in your case, in your query you would need to do this
select
'SupplierResults',
supplier.* -- <-- this
from
supplier
Try this
create table a (f1 int);
insert into a values (1);
select 'xxx', f1, a.* from a;
Basically, if there are other fields in the select, prefix '*' with the table name or an alias.
I'm trying to build a Data Integration job that uses pass-through to extract data from a view in a MySQL database.
We've been using pass-through a lot in the project, mostly extracting data from Redshift;
however, with MySQL I was not able to make it work properly.
It keeps complaining that a table is missing, even though when pass-through is off, the view is found and the data is extracted...
I tried every trick I know, from enabling case-sensitive DBMS object names to manually removing single/double quotes from the statement, just in case MySQL confuses it with something else...
No luck.
ODBC driver is [MySQL][ODBC 5.3(a) Driver][mysqld-5.5.53].
Running in a Windows environment.
Any idea how to solve this?
Thank you in advance.
EDIT
So, first of all, one correction (even though it's not that important: I extract from a view, not a table).
This is the code generated by the SAS Create Table transformation, with pass-through enabled. In the outer select I put an asterisk instead of the full list of columns:
proc sql;
connect to ODBC
(
READBUFF=10000 DATASRC="cmp.web_api" AUTHDOMAIN="MYSQL_CMP_Auth"
);
create table work."W7ZZZKOC"n as
select
*
from connection to ODBC
(
select
V_BI_ACCOUNT.ACCOUNT_NAME,
V_BI_ACCOUNT.ACQUISITION_SOURCE__C,
V_BI_ACCOUNT.ZUORA__ACTIVE__C,
V_BI_ACCOUNT.ADDRESS_LINE_1__C,
V_BI_ACCOUNT.ADDRESS_LINE_2__C,
V_BI_ACCOUNT.ADDRESS_LINE_3__C,
V_BI_ACCOUNT.AGREEMENT_DATE,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_1__C,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_2__C,
V_BI_ACCOUNT.PERSONBIRTHDATE,
V_BI_ACCOUNT.BLOCKED_REASON__C,
V_BI_ACCOUNT.BRAND__C,
V_BI_ACCOUNT.CPN__C,
V_BI_ACCOUNT.ACCCREATEDBYID,
V_BI_ACCOUNT.ACCCREATEDDATE,
V_BI_ACCOUNT.CURRENCY_PREFERENCE__C,
V_BI_ACCOUNT.CUSTOMER_FULL_NAME__PC,
V_BI_ACCOUNT.ACCOUNTID,
V_BI_ACCOUNT.ZUORA__CUSTOMERPRIORITY__C,
V_BI_ACCOUNT.DELIVERY_SALUTATION__C,
V_BI_ACCOUNT.DISPLAY_NAME,
V_BI_ACCOUNT.PERSONEMAIL,
V_BI_ACCOUNT.EMAILKEY__C,
V_BI_ACCOUNT.FACEBOOKKEY,
V_BI_ACCOUNT.FIRSTNAME,
V_BI_ACCOUNT.GENDER__C,
V_BI_ACCOUNT.PHONE,
V_BI_ACCOUNT.ACCLASTACTIVITYDATE,
V_BI_ACCOUNT.ACCLASTMODIFIEDDATE,
V_BI_ACCOUNT.LASTNAME,
V_BI_ACCOUNT.OTHER_EMAIL__C,
V_BI_ACCOUNT.PI_TYPE__C,
V_BI_ACCOUNT.ACCPARENTID,
V_BI_ACCOUNT.POSTCODE__C,
V_BI_ACCOUNT.PRIMARY_ACCOUNT_OF_THIS_CUSTOMER,
V_BI_ACCOUNT.ACCPRIMARY__C,
V_BI_ACCOUNT.ACCREASON_FOR_STATUS__C,
V_BI_ACCOUNT.ZUORA__SLA__C,
V_BI_ACCOUNT.ZUORA__SLASERIALNUMBER__C,
V_BI_ACCOUNT.SALUTATION,
V_BI_ACCOUNT.ACCSYSTEMMODSTAMP,
V_BI_ACCOUNT.PERSONTITLE,
V_BI_ACCOUNT.ZUORA__UPSELLOPPORTUNITY__C,
V_BI_ACCOUNT.X_CODE__C,
V_BI_ACCOUNT.ZUORA__ACCOUNT_ID__C,
V_BI_ACCOUNT.ZUORA__PAYMENTMETHODID__C,
V_BI_ACCOUNT.CITY,
V_BI_ACCOUNT.ORIGINAL_CREATED_DATE,
V_BI_ACCOUNT.SOURCE_SYSTEM_ID,
V_BI_ACCOUNT.STATUS,
V_BI_ACCOUNT.ZUORA__CONTACT_ID,
V_BI_ACCOUNT.ACCISDELETED,
V_BI_ACCOUNT.BILLING_ACCOUNT_NAME,
V_BI_ACCOUNT.ACZCREATEDDATE,
V_BI_ACCOUNT.ACZSYSTEMMODSTAMP,
V_BI_ACCOUNT.ACZLASTACTIVITYDATE,
V_BI_ACCOUNT.ZUORA__ACCOUNT__C,
V_BI_ACCOUNT.ZUORA__ACCOUNTNUMBER__C,
V_BI_ACCOUNT.ZUORA__AUTOPAY__C,
V_BI_ACCOUNT.ZUORA__BALANCE__C,
V_BI_ACCOUNT.ZUORA__CREDITCARDEXPIRATION__C,
V_BI_ACCOUNT.ZUORA__CURRENCY__C,
V_BI_ACCOUNT.ZUORA__MRR__C,
V_BI_ACCOUNT.ZUORA__PAYMENTTERM__C,
V_BI_ACCOUNT.ZUORA__PURCHASEORDERNUMBER__C,
V_BI_ACCOUNT.ZUORA__LASTINVOICEDATE__C,
V_BI_ACCOUNT.COUNTRY_NAME,
V_BI_ACCOUNT.COUNTRY_CODE,
V_BI_ACCOUNT.FAVOURITE_FOOTBALL_CLUB,
V_BI_ACCOUNT.COUNTY
from
web_api.V_BI_ACCOUNT as V_BI_ACCOUNT
);
%rcSet(&sqlrc);
disconnect from ODBC;
quit;
And again, when I extract the data without pass-through, it works successfully.
I found out the problem was a column name exceeding 32 characters.
As SAS supports column names only up to 32 characters,
the query fails to find PRIMARY_ACCOUNT_OF_THIS_CUSTOMER, as the original column name is PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C.
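A sketch of the workaround this suggests: alias the over-long column inside the pass-through query, so the name returned to SAS stays within 32 characters (the alias below is my own choice, assuming the driver accepts column aliases):

select
    PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C as PRIM_ACCT_OF_THIS_CUSTOMER -- 26 characters, within SAS's limit
from V_BI_ACCOUNT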
EDIT
One more thing I found out is that MySQL (at least through this driver) doesn't like the schema name or table aliases.
Therefore:
the FROM clause should specify only the table name, i.e. 'from v_bi_account' rather than 'from web_api.v_bi_account',
and should not use aliases, i.e. 'from v_bi_account' rather than 'from v_bi_account as v_bi_account'.
Thank you guys so much for your help.
I live in the Philippines, and when I insert a date into my database, it's stored 13 hours behind. For example, our datetime here is 02-11-2016 21-04-00, but when I insert, it stores 02-11-2016 08-04-00.
I already used this:
date_default_timezone_set('Asia/Manila');
But no luck. I've already uploaded the website to my domain.
This should be done by
date_default_timezone_set('Asia/Manila');
Make sure you are uploading the correct files.
If it doesn't work, try making a simple test page: set the timezone there, echo the current time, and check whether it matches this timezone. If it does, use that value instead.
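For instance, a minimal test page (a sketch; the file name and layout are up to you) could be:

<?php
// Force the Manila timezone, then print the current time as PHP sees it.
date_default_timezone_set('Asia/Manila');
echo date('Y-m-d H:i:s'); // should match your local wall-clock time
?>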
The timezone you set is used by PHP only, as Bimal said; you need to check the server time as well, because time is a very sensitive piece of information that you don't want to miss or play with.
I've been in that situation a couple of weeks back; there are two ways to fix it:
Set the server timezone to the correct timezone.
If the time difference is always and EXACTLY 13 hours, use this approach:
$new_time = date("Y-m-d H:i:s", strtotime('+13 hours'));
I ended up using the second method as the first one was not an option for me.
Best of Luck.
If I am not wrong, you have pasted a PHP function which sets the time zone for PHP only, not for MySQL. MySQL has a time_zone variable which can be set globally or per session. You should use SET GLOBAL time_zone = timezone; or SET time_zone = timezone;.
If you don't have direct control of your DB server, you can issue the second statement, i.e. SET time_zone = ...;, as the first statement of your session.
Alternatively, you can also use the CONVERT_TZ function to convert an already stored value.
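A minimal sketch of both options, assuming a hypothetical table my_table with a created_at column (named zones like 'Asia/Manila' require MySQL's time zone tables to be loaded, so the numeric offsets below are the safer illustration):

-- Per session: make NOW() and CURRENT_TIMESTAMP use Manila time
-- for this connection only.
SET time_zone = '+08:00';

-- Or convert an already stored UTC value on the way out.
SELECT CONVERT_TZ(created_at, '+00:00', '+08:00') AS created_at_manila
FROM my_table;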
This should be the simplest thing but for some reason it's eluding me completely.
I have a Sequel connection to a database named DB. It's using the Mysql2 engine if that's important.
I'm trying to update a single record in a table in the database. The short loop I'm using looks like this:
dataset = DB["SELECT post_id, message FROM xf_post WHERE message LIKE '%#{match}%'"]
dataset.each do |row|
new_message = process_message(row[:message])
# HERE IS WHERE I WANT TO UPDATE THE ROW IN THE DATABASE!
end
I've tried:
dataset.where('post_id = ?', row[:post_id]).update(message: new_message)
Which is what the Sequel cheat sheet recommends.
And:
DB["UPDATE xf_post SET message = ? WHERE post_id = ?", new_message, row[:post_id]]
Which should be raw SQL executed by the Sequel connector. Neither throws an error nor outputs any error message (I'm using a logger with the Sequel connection). But both calls fail to update the records in the database; the data is unchanged when I query the database after running the code.
How can I make the update call function properly here?
Your problem is you are using a raw SQL dataset, so the where call isn't going to change the SQL, and update is just going to execute the raw SQL. Here's what you want to do:
dataset = DB[:xf_post].select(:post_id, :message).
where(Sequel.like(:message, "%#{match}%"))
That will make the where/update combination work.
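Putting that together with the loop from the question (process_message is the question's own helper), a sketch might look like:

dataset = DB[:xf_post].select(:post_id, :message).
          where(Sequel.like(:message, "%#{match}%"))

dataset.each do |row|
  new_message = process_message(row[:message])
  # where narrows the dataset to this row; update executes the UPDATE
  dataset.where(post_id: row[:post_id]).update(message: new_message)
end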
Note that your original code has a trivial SQL injection vulnerability if match depends on user input, which this new code avoids. You may want to consider using Dataset#escape_like to escape metacharacters inside match; otherwise, if match depends on user input, it's possible for users to use very complex matching syntax that the database may execute slowly or not handle properly.
Note that the reason that
DB["UPDATE xf_post SET message = ? WHERE post_id = ?", new_message, row[:post_id]]
doesn't work is that it only creates a dataset; it doesn't execute it. You can actually call update on that dataset to run the query and return the number of affected rows.
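In other words, a minimal sketch:

affected_rows = DB["UPDATE xf_post SET message = ? WHERE post_id = ?",
                   new_message, row[:post_id]].update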
Assuming that all values of MBR_DTH_DT evaluate to a Date data type other than the value '00000000', could the following UPDATE SQL fail when running on multiple processors if the CAST were performed before the filter by racing threads?
UPDATE a
SET a.[MBR_DTH_DT] = cast(a.[MBR_DTH_DT] as date)
FROM [IPDP_MEMBER_DEMOGRAPHIC_DECBR] a
WHERE a.[MBR_DTH_DT] <> '00000000'
I am trying to find the source of the following error
Error: 2014-01-30 04:42:47.67
Code: 0xC002F210
Source: Execute csp_load_ipdp_member_demographic Execute SQL Task
Description: Executing the query "exec dbo.csp_load_ipdp_member_demographic" failed with the following error: "Conversion failed when converting date and/or time from character string.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
End Error
It could be another UPDATE or INSERT query, but the others in question appear to have data that is properly typed from what I see, so I am left only with the above.
No, it simply sounds like you have bad data in the MBR_DTH_DT column, which is VARCHAR but should be a date (once you clean out the bad data).
You can identify those rows using:
SELECT MBR_DTH_DT
FROM dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
WHERE ISDATE(MBR_DTH_DT) = 0;
Now, you may only get rows that happen to match the where clause you're using to filter (e.g. MBR_DTH_DT = '00000000').
This has nothing to do with multiple processors, race conditions, etc. It's just that SQL Server can try to perform the cast before it applies the filter.
Randy suggests adding an additional clause, but this is not enough, because the CAST can still happen before any/all filters. You usually work around this by something like this (though it makes absolutely no sense in your case, when everything is the same column):
UPDATE dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
SET MBR_DTH_DT = CASE
WHEN ISDATE(MBR_DTH_DT) = 1 THEN CAST(MBR_DTH_DT AS DATE)
ELSE MBR_DTH_DT END
WHERE MBR_DTH_DT <> '00000000';
(I'm not sure why in the question you're using UPDATE alias FROM table AS alias syntax; with a single-table update, this only serves to make the syntax more convoluted.)
However, in this case, this does you absolutely no good; since the target column is a string, you're just trying to convert a string to a date and back to a string again.
The real solution: stop using strings to store dates, and stop using token strings like '00000000' to denote that a date isn't available. Either use a dimension table for your dates or just live with NULL already.
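A sketch of that cleanup, assuming '00000000' is the only non-date token, the column is nullable, and NULL is acceptable for a missing death date:

-- First blank out anything that isn't a valid date.
UPDATE dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
  SET MBR_DTH_DT = NULL
  WHERE ISDATE(MBR_DTH_DT) = 0;

-- Then change the column to a real date type.
ALTER TABLE dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
  ALTER COLUMN MBR_DTH_DT date NULL;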
Not likely. Even with multiple processors, there is no guarantee the query will be processed in parallel.
Why not try something like this, assuming you're using SQL Server 2012? Even if you're not, you could write a UDF to validate a date in the same way.
UPDATE a
SET a.[MBR_DTH_DT] = cast(a.[MBR_DTH_DT] as date)
FROM [IPDP_MEMBER_DEMOGRAPHIC_DECBR] a
WHERE a.[MBR_DTH_DT] <> '00000000' And IsDate(MBR_DTH_DT) = 1
Most likely you have bad data and are not aware of it.
Whoops, just checked: IsDate has been available since SQL Server 2005, so try using it.