I'm changing my existing Yesod application to run on a SQL backend instead of mongo. The generated table structure is stricter than the mongo backend's, so foreign key references have to be created correctly on insert.
postFeedingsR :: Handler RepJson
postFeedingsR = do
  muser <- maybeAuth
  parsedFeeding <- parseJsonBody_ -- get request body as JSON
  let userId = getUserId muser
  let feedingWithUser = Feeding (feedingDate parsedFeeding)
                                (feedingSide parsedFeeding)
                                (feedingTime parsedFeeding)
                                (feedingExcrements parsedFeeding)
                                (feedingRemarks parsedFeeding)
                                userId -- should be linked to the user
  fid <- runDB $ insert feedingWithUser -- store in database
  -- runDB $ update fid [FeedingUserId =. userId] -- old mongo style of linking the feeding to the user
  sendResponseCreated $ FeedingR fid -- return the id
I try to update the Entity I get from parseJsonBody_ with the user id from maybeAuth. However, this gives me the following error:
No instance for (aeson-0.6.0.2:Data.Aeson.Types.Class.FromJSON
(FeedingGeneric backend0))
arising from a use of `parseJsonBody_'
Possible fix:
add an instance declaration for
(aeson-0.6.0.2:Data.Aeson.Types.Class.FromJSON
(FeedingGeneric backend0))
In a stmt of a 'do' block: parsedFeeding <- parseJsonBody_
In the expression:
do { muser <- maybeAuth;
parsedFeeding <- parseJsonBody_;
let userId = getUserId muser;
let feedingWithUser
= Feeding
(feedingDate parsedFeeding)
(feedingSide parsedFeeding)
(feedingTime parsedFeeding)
(feedingExcrements parsedFeeding)
(feedingRemarks parsedFeeding)
userId;
.... }
In an equation for `postFeedingsR':
postFeedingsR
= do { muser <- maybeAuth;
parsedFeeding <- parseJsonBody_;
let userId = ...;
.... }
I'm not sure why this happens. Could anyone point me in the right direction to solve this?
Solved by changing the auth line to:
Entity uid u <- requireAuth
and by adding the function:
addUserToFeeding :: UserId -> Feeding -> Feeding
addUserToFeeding uid Feeding {feedingDate = date, feedingSide = side, feedingTime = time, feedingExcrements = ex, feedingRemarks = remarks} =
  Feeding date side time ex remarks uid
to create a new Feeding with associated user. This Feeding can then be stored in the normal way in Yesod:
let feedingWithUser = addUserToFeeding uid parsedFeeding
fid <- runDB $ insert feedingWithUser --store in database
Related
I have a table in MySQL that contains user interactions with a web page. I need to extract the rows for the users where the date of the interaction is earlier than a certain benchmark date, and that benchmark date is different for each customer (I extract that date from a different database).
My approach was to set a JSON variable in which each key is a user and the value is that user's benchmark date, and to use it in the query to extract the intended fields.
Example in R:
# MainDF contains the user and the benchmark date from a different database
json_str <- mapply(function(uid, bench_date) {
  paste0('{', '"', uid, '"', ':', '"', bench_date, '"', '}')
}, MainDF[, 'uid'],
   MainDF[, 'date']
)
json_str <- paste0("'", '[', paste0(json_str, collapse = ','), ']', "'")
temp_var <- paste('set @test=', json_str)
The intention was for temp_var to look like:
set @test= '{"0001":"2010-05-05",
"0012":"2015-05-05",
"0101":"2018-07-20"}'
but it actually looks like:
set @test= '{\"0001\":\"2010-05-05\",
\"0012\":\"2015-05-05\",
\"0101\":\"2018-07-20\"}'
Then I create the main query:
main_Q <- "select user_id, date
           from interaction
           where 1=1
           and json_contains(json_keys(@test), concat('\"',user_id,'\"')) = 1
           and date <= json_unquote(json_extract(@test,
                                     concat('$.','\"',user_id, '\"')
                                     )
                        )
          "
To execute, I first set the temporary variable and then run the main query:
dbSendQuery(connection, temp_var)
resp <- dbSendQuery(connection, main_Q )
target_df <- fetch(resp, n=-1)
dbClearResult(resp )
When I test a fraction of it in a SQL IDE it does work. However, in R it doesn't return anything.
I think the issue is that R escapes the double quotes in temp_var and SQL ends up reading
set #test= '{\"0001\":\"2010-05-05\",
\"0012\":\"2015-05-05\",
\"0101\":\"2018-07-20\"}'
which is not won't work.
For example, if I execute:
set @test= '{"0001":"2010-05-05",
"0012":"2015-05-05",
"0101":"2018-07-20"}'
select json_keys(@test)
it will return an array with the keys, but that is not the case with
set @test= '{\"0001\":\"2010-05-05\",
\"0012\":\"2015-05-05\",
\"0101\":\"2018-07-20\"}'
select json_keys(@test)
I am not sure how to solve the issue, but I need double quotes to specify the JSON. Is there any other approach that I should try or a way to make this work?
First, I think it is generally better to use a well-known library/package for converting to/from JSON, for several reasons.
json_str <- jsonlite::toJSON(setNames(as.list(MainDF$date), MainDF$uid), auto_unbox = TRUE)
json_str
# {"0001":"2010-05-05","0012":"2015-05-05","0101":"2018-07-20"}
This gives you a string that you should be able to place just about anywhere.
And while looking at the object on the R console will show the escaped double-quotes,
as.character(json_str)
# [1] "{\"0001\":\"2010-05-05\",\"0012\":\"2015-05-05\",\"0101\":\"2018-07-20\"}"
that is merely R's representation (shows all strings within double-quotes, and therefore needs to escape any double-quotes within the string).
Adding it into some script should be straightforward:
cat(paste('set @test=', sQuote(json_str)), '\n')
# set @test= '{"0001":"2010-05-05","0012":"2015-05-05","0101":"2018-07-20"}'
I'm assuming that having each entry on its own row is not critical. If it is, and indentation is important, perhaps this is more your style:
spaces <- strrep(' ', 2 + nchar('set @test = '))
cat(paste0('set @test = ', sQuote(gsub(",", paste0(",\n", spaces), json_str))), '\n')
# set @test = '{"0001":"2010-05-05",
#               "0012":"2015-05-05",
#               "0101":"2018-07-20"}'
Data:
MainDF <- read.csv(stringsAsFactors=FALSE, colClasses='character', text='
uid,date
0001,2010-05-05
0012,2015-05-05
0101,2018-07-20')
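With that string in hand, a minimal sketch of wiring it back into the workflow from the question (assuming the same connection object and main_Q query, and quoting the JSON by hand to avoid any fancy-quote surprises) could look like:
library(DBI)
# set the MySQL user variable, then run the main query that references @test
dbSendQuery(connection, paste0("set @test = '", json_str, "'"))
resp <- dbSendQuery(connection, main_Q)
target_df <- dbFetch(resp, n = -1)
dbClearResult(resp)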
I am trying to use the following function in R:
heritblup <- function(name) {
  library(RMySQL)
  library(DBI)
  con <- dbConnect(RMySQL::MySQL(),
                   dbname = "mydab",
                   host = "localhost",
                   port = 3306,
                   user = "root",
                   password = "")
  value1 <- 23
  rss <- paste0("INSERT INTO namestable
                 (myvalue, person)
                 VALUES ('$value1', '", name, "')")
  rs <<- dbGetQuery(con, rss)
}
heritblup("Tommy")
But I keep getting this error:
Error in as.character.default() :
  no method for coercing this S4 class to a vector
Called from: as.character.default()
I tried to change the paste function to this:
rss<- paste0 ("INSERT INTO namestable
(myvalue, person)
VALUES ($value1, ",name,")")
The error persists; I have no idea what's wrong.
Please help.
There are a couple of issues in the code. I'm not sure if the OP is attempting to insert records into the database or fetch records from it.
Based on the query, I assume the goal is to insert data into the database table.
The rule is that the query should be prepared in R exactly the way it will be executed in MySQL. Value substitution (if any) should be performed in R, as the MySQL engine will not have any idea about variables from R.
Hence, the query preparation step should be done as:
rss <- sprintf("INSERT INTO namestable (myvalue, person) VALUES (%d, '%s')", value1, name)
# "INSERT INTO namestable (myvalue, person) VALUES (23, 'test')"
If inserting data is the goal, then dbGetQuery is not the right option; per the R documentation, dbSendStatement() should be used for data manipulation. The reference from the help page suggests:
However, callers are strongly encouraged to use dbSendStatement() for
data manipulation statements.
Based on that, the query execution lines should be:
rs <- dbSendStatement(con, rss)
ret_val <- dbGetRowsAffected(rs)
dbClearResult(rs)
dbDisconnect(con)
return(ret_val)
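As a side note (not part of the original answer), DBI can also build the statement for you and handle the quoting and escaping of name; a minimal sketch, assuming the same con, value1 and name as above:
library(DBI)
# let DBI interpolate and escape the values instead of pasting them in by hand
rss <- sqlInterpolate(con,
                      "INSERT INTO namestable (myvalue, person) VALUES (?value, ?person)",
                      value = value1, person = name)
rows_affected <- dbExecute(con, rss)  # runs the statement and returns the number of affected rows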
I am following this tutorial, which uses scotty with persistent to create a simple API.
However, I am trying to create a simple API with scotty and the mysql-simple library.
Now I am stuck at one point in the code.
In the code below I am not able to convert the getUser function to the type ActionT Error ConfigM, which is why my code is failing.
Can anyone help me understand how I can convert the getUser function to achieve the needed type signature?
Code
type Error = Text
type Action = ActionT Error ConfigM ()

config :: Config
config = Config
  { environment = Development
  , db1Conn = connect connectionInfo
  }

main :: IO ()
main = do
  runApplication config

runApplication :: Config -> IO ()
runApplication c = do
  o <- getOptions (environment c)
  let r m = runReaderT (runConfigM m) c
  scottyOptsT o r application

application :: ScottyT Error ConfigM ()
application = do
  e <- lift (asks environment)
  get "/user" getTasksA

getTasksA :: Action
getTasksA = do
  u <- getUser
  json u

getUser :: IO User
getUser = do
  e <- asks environment
  conn <- db1Conn config
  [user] <- query_ conn "select login as userId, email as userEmail from member limit 1"
  return user
Error
• Couldn't match type ‘IO’ with ‘ActionT Error ConfigM’
Expected type: ActionT Error ConfigM User
Actual type: IO User
• In a stmt of a 'do' block: u <- getUser
In the expression:
do { u <- getUser;
json u }
In an equation for ‘getTasksA’:
getTasksA
= do { u <- getUser;
json u }
You left out plenty of code (imports and pragmas and the definition of User) - please include that next time; see MCVE.
But now to your question:
I would change the Action type to the following
type Action a = ActionT Error ConfigM a
then getTasksA has the following type signature
getTasksA :: Action ()
getTasksA = do
u <- getUser
json u
(alternatively you can write this as getTasksA = getUser >>= json)
and getUser
getUser :: Action User
getUser = do
  e <- asks environment
  conn <- db1Conn config
  [user] <- liftIO $ query_ conn "select login as userId, ..."
  return user
A few remarks
[user] <- liftIO $ query_ .. is a bad idea - if no user is found, this crashes your application - try to write total functions and total pattern matches. Better to return a Maybe User.
getUser :: Action (Maybe User)
getUser = do
  e <- asks environment
  conn <- db1Conn config
  fmap listToMaybe . liftIO $ query_ conn "select login as userId, ..."
if you can wrap your head around it, rather use persistent than writing your SQL queries by hand - hand-written queries are quite error-prone, especially when refactoring; just imagine renaming userId to userID.
you ask several times for the environment but then don't use it. Compile with -Wall, or even -Werror, to get a warning or even elevate warnings to compile errors (which is a good idea for production settings).
In an Odoo 9 custom module, the attachment=True parameter was added to a binary field later on; since then, new records are stored in filesystem storage.
The old records were created before attachment=True was used, so no entries were created for them in the ir.attachment table and nothing was saved to the filesystem.
I would like to know how to migrate the old records' binary field values to filesystem storage. How can I create/insert rows in ir_attachment based on the old records' binary field values? Is any script available?
You have to include the Postgres bin path in pg_path in your configuration file. This will restore the filestore that contains the binary fields:
pg_path = D:\fx\upsynth_Postgres\bin
I'm sure that you no longer need a solution to this as you asked 18 months ago, but I have just had the same issue (many gigabytes of binary data in the database) and this question came up on Google so I thought I would share my solution.
When you set attachment=True the binary column will remain in the database, but the system will look in the filestore instead for the data. This left me unable to access the data from the Odoo API so I needed to retrieve the binary data from the database directly, then re-write the binary data to the record using Odoo and then finally drop the column and vacuum the table.
Here is my script, which is inspired by this solution for migrating attachments, but this solution will work for any field in any model and reads the binary data from the database rather than from the Odoo API.
import xmlrpclib
import psycopg2
username = 'your_odoo_username'
pwd = 'your_odoo_password'
url = 'http://ip-address:8069'
dbname = 'database-name'
model = 'model.name'
field = 'field_name'
dbuser = 'postgres_user'
dbpwd = 'postgres_password'
dbhost = 'postgres_host'
conn = psycopg2.connect(database=dbname, user=dbuser, password=dbpwd, host=dbhost, port='5432')
cr = conn.cursor()
# Get the uid
sock_common = xmlrpclib.ServerProxy ('%s/xmlrpc/common' % url)
uid = sock_common.login(dbname, username, pwd)
sock = xmlrpclib.ServerProxy('%s/xmlrpc/object' % url)
def migrate_attachment(res_id):
# 1. get data
cr.execute("SELECT %s from %s where id=%s" % (field, model.replace('.', '_'), res_id))
data = cr.fetchall()[0][0]
# Re-Write attachment
if data:
data = str(data)
sock.execute(dbname, uid, pwd, model, 'write', [res_id], {field: str(data)})
return True
else:
return False
# SELECT attachments:
records = sock.execute(dbname, uid, pwd, model, 'search', [])
cnt = len(records)
print cnt
i = 0
for res_id in records:
att = sock.execute(dbname, uid, pwd, model, 'read', res_id, [field])
status = migrate_attachment(res_id)
print 'Migrated ID %s (attachment %s of %s) [Contained data: %s]' % (res_id, i, cnt, status)
i += 1
cr.close()
print "done ..."
Afterwards, drop the column and vacuum the table in psql.
I would like to extract all CREATE statements in my 50 MySQL databases via SHOW CREATE TABLE db.table, SHOW CREATE TABLE db1.mytable, SHOW CREATE TABLE db2.sometable, or SHOW CREATE TABLE db3.mytable1. Each of the DBs has some tables inside: db1 (table, mytable, ...), db2 (table1, sometable), and so on.
To illustrate the DBs with an example query:
SELECT *
FROM db.table1 m
LEFT JOIN db1.sometable o ON m.id = o.id
LEFT JOIN db2.sometables t ON p.id=t.id
LEFT JOIN db3.sometable s ON s.column='john'
library(RMySQL)
library(DBI)
con <- dbConnect(RMySQL::MySQL(),
                 username = "",
                 password = "",
                 host = "",
                 port = 3306,
                 dbname = mydbname)
# when using dbs <- dbGetQuery(con, "SHOW DATABASES") I have to supply dbname = mydbname to get all DBs
Using dbs <- dbGetQuery(con, "SHOW DATABASES") I can extract all 50 databases in the connection as a character vector. I would like to loop over each DB in dbs and apply SHOW CREATE TABLE to each row/db. I suppose I have to parse each row/db into dbname = mydbname and dbs <- dbGetQuery(con, "SHOW CREATE TABLE"). But I just can't figure out how to write the loops.
I tried:
apply(dbs, 1, function(row) {
  dbname <- row[]
  for (i in 1:length(dbname)) {
    create <- dbGetQuery(con, "SHOW CREATE TABLE")
  }
})
But that doesn't seem right. I suppose I have to include con in the loop somehow. Otherwise I'll get:
Error in .local(drv, ...) : object 'dbname' not found
So I tried:
apply(dbs, 1, function(row) {
  dbname <- row[]
  for (i in 1:length(dbname)) {
    con <- dbConnect(RMySQL::MySQL(),
                     username = "",
                     password = "",
                     host = "",
                     port = 3306,
                     dbname = [i])
    create <- dbGetQuery(con, "SHOW CREATE TABLE")
  }
})
I suppose this comes close to the solution, but I'm missing something:
dbs <- dbGetQuery(con, "show databases")
library(foreach)
foreach(i = 1:length(dbs)) %dopar% {
  query <- paste("SHOW CREATE TABLE", dbs[i])
  creates <- dbGetQuery(con, query)
}
Consider this approach of importing a data frame of each database (leaving out the system ones, INFORMATION_SCHEMA and MYSQL) and their corresponding tables. Then, run the SHOW CREATE TABLE statements. Finally, merge the original data frame with the row-bound data frame of create statements.
Now, the one caveat is tables whose names repeat across databases. To return distinct values of such combinations, aggregate() with the head function is used.
con <- dbConnect(RMySQL::MySQL(),
username = "****", password = "****",
host = "****", port = 3306,
dbname= "****")
dbtbls <- dbGetQuery(con, "SELECT `TABLE_SCHEMA` AS `Database`,
`TABLE_NAME` AS `Table`
FROM `INFORMATION_SCHEMA`.`TABLES`
WHERE `TABLE_TYPE` = 'BASE TABLE'
AND `TABLE_SCHEMA` NOT LIKE '%SCHEMA%'
AND `TABLE_SCHEMA` NOT LIKE '%MYSQL%' ")
# LIST OF SQL STATEMENTS
sql <- paste0("SHOW CREATE TABLE ", dbtbls$Database, ".", dbtbls$Table)
# LIST OF DATAFRAMES
createstmts <- lapply(sql, function(x) dbGetQuery(con, x))
dbDisconnect(con)
# ROW BIND LIST INTO ONE DATAFRAME TO MERGE WITH ORIGINAL
stmtsdf <- do.call(rbind, createstmts)
finaldf <- merge(dbtbls, stmtsdf, by='Table')
# RETURN DISTINCT RECORDS
finaldf <- aggregate(.~Database+Table, finaldf, FUN=head, 1)
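If the goal is to keep the DDL around as a script, one possible follow-up (not part of the original answer) is to dump the collected statements to a .sql file. SHOW CREATE TABLE returns the DDL in its second column, so indexing by position sidesteps any column-name mangling:
# write every CREATE statement, terminated by a semicolon, to one schema file
writeLines(vapply(createstmts, function(df) paste0(df[[2]], ";"), character(1)),
           "all_schemas.sql")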
mysqldump --no-data
does exactly what you are asking for. (There may be other parameters desirable to avoid/include CREATE DATABASE, etc.)
If the requirement is to subsequently pull the CREATEs into R, then I ask whether this is a one-time task, or a recurring task. For one-time, I would suggest that, overall, the mysqldump approach might be simpler.
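For the one-time case, a minimal sketch of driving mysqldump from R (assuming the mysqldump client is on the PATH and credentials come from an option file):
# schema only, no rows, for every database; output captured into one file
system2("mysqldump", args = c("--no-data", "--all-databases", "--skip-comments"),
        stdout = "all_schemas.sql")
# the CREATE statements can then be read back into R as plain text
creates <- readLines("all_schemas.sql")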
First, you can simply use
for (i in 1:length(dbs)) { }
Or you can look into the apply functions, particularly sapply. There you can parse each database name, connect, and get all its tables as a list or vector. Then you can loop inside those to get the CREATE TABLE statements.
So it is basically apply inside apply (see the sketch after the link below).
For a good explanation of apply functions, you can look into http://www.r-bloggers.com/using-apply-sapply-lapply-in-r/
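Not part of the original answer, but a minimal sketch of that apply-inside-apply idea, assuming a single con with enough privileges that is reused for every database:
# list the non-system databases, then every table, then every CREATE statement
dbs <- dbGetQuery(con, "SHOW DATABASES")$Database
dbs <- setdiff(dbs, c("information_schema", "mysql", "performance_schema", "sys"))
creates <- unlist(lapply(dbs, function(db) {
  tbls <- dbGetQuery(con, paste0("SHOW TABLES FROM `", db, "`"))[[1]]
  sapply(tbls, function(tbl) {
    # SHOW CREATE TABLE returns the DDL in its second column
    dbGetQuery(con, paste0("SHOW CREATE TABLE `", db, "`.`", tbl, "`"))[[2]]
  })
}))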