Compare time in gorm: time.Time or string - mysql

We are using gorm v1.9.11 and MySQL.
// before is string like "2019-01-15T06:31:14Z"
if len(before) > 0 {
    // TODO: Determine time format
    beforeDate, e := time.Parse(time.RFC3339, before)
    if e != nil {
        ...
    }
    // TODO: Pass in string?
    statement = statement.Where("created_at < ?", beforeDate)
}
I have two questions:
1. Are there any problems in the above piece of code? I found the query took about 5 minutes to finish when before is given; without before, it takes 14 seconds.
2. Is it OK to pass a string like "2019-01-15T06:31:14Z" as the argument in statement.Where("created_at < ?", beforeString)?
Thanks
UPDATE
We have an index on created_at, and the query plan shows that it will filter 8 million rows based on created_at. I guess this is the reason.
Now I am trying to prevent this index from being used.
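On question 2: passing the parsed time.Time is the safer option, since the driver serializes it for the placeholder, whereas a raw "2019-01-15T06:31:14Z" string leaves MySQL to do an implicit string-to-DATETIME conversion (and, depending on server version, the trailing Z may not be understood). A minimal stdlib-only sketch of the parse and the equivalent MySQL DATETIME layout (toMySQLDatetime is a hypothetical helper, not from the question's code):

```go
package main

import (
	"fmt"
	"time"
)

// toMySQLDatetime parses an RFC 3339 timestamp like the `before` value in the
// question and re-formats it in MySQL's DATETIME layout.
func toMySQLDatetime(before string) (string, error) {
	beforeDate, err := time.Parse(time.RFC3339, before)
	if err != nil {
		return "", err
	}
	return beforeDate.Format("2006-01-02 15:04:05"), nil
}

func main() {
	s, err := toMySQLDatetime("2019-01-15T06:31:14Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // 2019-01-15 06:31:14
}
```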


Finding the number of times a name is in a string

string = "Hamza flew his kite today. But Hamza forgot to play basketball"

def count_name():
    count = 0
    for sub_str in string:
        if sub_str == "Hamza":
            count += 1
    return count

print(count_name())
My goal here was to find the number of times the name "Hamza" appears in the string, but it keeps returning 0 instead of 2.
I tried setting the variable count = 0, so it can count how many times the name "Hamza" appears in the string.
Here is one way to count the number of times "Hamza" appears in a given string. The countName function takes a string as an argument and returns the number of occurrences:
function countName(str) {
    let count = 0;
    let index = str.indexOf("Hamza");
    while (index != -1) {
        count++;
        index = str.indexOf("Hamza", index + 1);
    }
    return count;
}

let string = "Hamza flew his kite today. But Hamza forgot to play basketball";
let count = countName(string);
console.log(count); // Output: 2
When you do for sub_str in string: you are looking at one letter at a time (not one word at a time). You are checking...
'H' == 'Hamza', which returns False
'a' == 'Hamza', which returns False
'm' == 'Hamza', which returns False...
That is why your count will never increase.
Luckily for you, Python has built-in methods to make your life easy.
Try string.count('Hamza')

Golang Gorm scope broken after upgrade from v1 to v2

I was using Gorm v1. I had this scope for pagination which was working correctly:
func Paginate(entity BaseEntity, p *Pagination) func(db *gorm.DB) *gorm.DB {
    return func(db *gorm.DB) *gorm.DB {
        var totalRows int64
        db.Model(entity).Count(&totalRows)
        totalPages := int(math.Ceil(float64(totalRows) / float64(p.PerPage)))
        p.TotalPages = totalPages
        p.TotalCount = int(totalRows)
        p.SetLinks(entity.ResourceName())
        return db.Offset(p.Offset).Limit(p.PerPage)
    }
}
and the way I called it:
if err := gs.db.Scopes(entities.Paginate(genre, p)).Find(&gs.Genres).Error; err != nil {
    return errors.New(err.Error())
}
again, this used to work correctly, that is until I upgraded to Gorm v2. Now I'm getting the following message:
[0.204ms] [rows:2] SELECT * FROM genres LIMIT 2
sql: expected 9 destination arguments in Scan, not 1; sql: expected 9 destination arguments in Scan, not 1[GIN] 2022/06/18 - 00:41:00 | 400 | 1.5205ms | 127.0.0.1 | GET "/api/v1/genres"
Error #01: sql: expected 9 destination arguments in Scan, not 1; sql: expected 9 destination arguments in Scan, not 1
Now, I found out that the error is due to this line:
db.Model(entity).Count(&totalRows)
because if I remove it then my query executes correctly (obviously the data for TotalPages is not correct since it wasn't calculated). Going through the documentation I saw https://gorm.io/docs/method_chaining.html#Multiple-Immediate-Methods, so my guess is that the connection used to get totalRows is being reused and has some residual data, which is why my offset and limit query is failing.
I tried to create a new session for both the count and the offset queries:
db.Model(entity).Count(&totalRows).Session(&gorm.Session{})
return db.Offset(p.Offset).Limit(p.PerPage).Session(&gorm.Session{})
hoping that each one will use its own session, but that doesn't seem to work.
Any suggestions?
In case anyone needs it:
I did have to create a new session but I wasn't creating it the right way. I ended up doing:
countDBSession := db.Session(&gorm.Session{Initialized: true})
countDBSession.Model(entity).Count(&totalRows)
and that worked as expected. So my scope now is:
// Paginate is a Gorm scope function.
func Paginate(entity BaseEntity, p *Pagination) func(db *gorm.DB) *gorm.DB {
    return func(db *gorm.DB) *gorm.DB {
        var totalRows int64
        // we must create a new session to run the count, otherwise by using the same db connection
        // we'll get some residual data which will cause db.Offset(p.Offset).Limit(p.PerPage) to fail.
        countDBSession := db.Session(&gorm.Session{Initialized: true})
        countDBSession.Model(entity).Count(&totalRows)
        totalPages := int(math.Ceil(float64(totalRows) / float64(p.PerPage)))
        p.TotalPages = totalPages
        p.TotalCount = int(totalRows)
        p.SetLinks(entity.ResourceName())
        return db.Offset(p.Offset).Limit(p.PerPage)
    }
}
Notice that I'm using a new session to get the count via countDBSession, which won't affect the later use of the *gorm.DB parameter in return db.Offset(p.Offset).Limit(p.PerPage)

Safely perform DB migrations with Go

Let's say I have a web app that shows a list of posts. The post struct is:
type Post struct {
    Id    int64 `sql:",primary"`
    Title string
    Body  string
}
It retrieves the posts with:
var posts []*Post
rows, err := db.QueryContext(ctx, "select * from posts;")
if err != nil {
    return nil, oops.Wrapf(err, "could not get posts")
}
defer rows.Close()
for rows.Next() {
    p := &Post{}
    err := rows.Scan(
        &p.Id,
        &p.Title,
        &p.Body,
    )
    if err != nil {
        return nil, oops.Wrapf(err, "could not scan row")
    }
    posts = append(posts, p)
}
return posts, nil
All works fine. Now, I want to alter the table schema by adding a column:
ALTER TABLE posts ADD author varchar(62);
Suddenly, the requests to get posts result in:
sql: expected 4 destination arguments in Scan, not 3
which makes sense since the table now has 4 columns instead of the 3 stipulated by the retrieval logic.
I can then update the struct to be:
type Post struct {
    Id     int64 `sql:",primary"`
    Title  string
    Body   string
    Author string
}
and the retrieval logic to be:
for rows.Next() {
    p := &Post{}
    err := rows.Scan(
        &p.Id,
        &p.Title,
        &p.Body,
        &p.Author,
    )
    if err != nil {
        return nil, oops.Wrapf(err, "could not scan row")
    }
    posts = append(posts, p)
}
which solves this. However, this implies there is always a period of downtime between migration and logic update + deploy. How to avoid that downtime?
I have tried swapping the order of the above changes but this does not work, with that same request resulting in:
sql: expected 3 destination arguments in Scan, not 4
(which makes sense, since the table only has 3 columns at that point as opposed to 4);
and other requests resulting in:
Error 1054: Unknown column 'author' in 'field list'
(which makes sense, because at that point the posts table does not have an author column just yet)
You should be able to achieve the desired behavior by adapting the SQL query to return exactly the fields you want to populate:
SELECT Id, Title, Body FROM posts;
This way, even if you add another column Author, the query results only contain 3 values.

Golang Gorm working with slices and postgres' jsonb field

I have a requirement to save either [] or a list with different integer values like [1, 7, 8]. These values can be anything between 1-31.
My struct for this field (DateOfMonth) is:
type Subscription struct {
    gorm.Model
    Enabled     bool    `gorm:"DEFAULT:True"`
    Deleted     bool    `gorm:"DEFAULT:False"`
    UserID      uint    `gorm:"not null"`
    Cap         int     `gorm:"DEFAULT:-1"`
    DateOfMonth []int64 `gorm:"type:json default '[]'::json"`
}
Now, I need to read this value in an API and compare it with the current_date.
For this, I have tried:
type Result struct {
    ID               uint
    Email            string
    UniqueIdentifier string
    Cap              int
    DateOfMonth      []uint8
}
var subscriptions []Result
if err := db.Table("users").
    Select("users.id, users.email, users.unique_identifier, subscriptions.cap, subscriptions.date_of_month").
    Joins("join subscriptions on users.id = subscriptions.user_id").
    Where("subscriptions.subscription_type_id=? and users.is_verified=? and subscriptions.enabled=?", subscription_type_id, true, true).
    Find(&subscriptions).Error; err != nil {
    c.JSON(http.StatusNotFound, gin.H{"error": true, "reason": "Subscribers not found!", "code": http.StatusBadRequest, "status": "failure"})
    return
}
If I change DateOfMonth []uint8 to DateOfMonth []int64, it gives an error.
The value that I receive in this field is a list of byte values
For example, [] -> [91 93] and [6] -> [91 54 93]
If I do bytes.NewBuffer(s.DateOfMonth), I get the correct value, but then I need to iterate over this slice to compare it with today's date. I have tried a lot of ways to get the actual value (6) in the loop (the dom value), but to no avail.
// if len(s.DateOfMonth) > 0 {
//     // date_of_month_new := binary.BigEndian.Uint64(date_of_month)
//     todays_date_of_month := time.Now().Day()
//     fmt.Println(todays_date_of_month) //, date_of_month, reflect.TypeOf(date_of_month))
//     for _, dom := range s.DateOfMonth {
//         fmt.Println("help", reflect.TypeOf(dom), dom, todays_date_of_month)
//         // if dom == todays_date_of_month {
//         //     fmt.Println("matching", dom, todays_date_of_month)
//         // }
//     }
// }
I have even tried suggestions from various answers like this, this, this
What am I missing here? Your help will be highly appreciated.
Some of the errors that I got:
invalid sql type DateOfMonth (slice) for postgres
Golang cannot range over pointer to slice
cannot range over bytes.NewBuffer(s.DateOfMonth) (type *bytes.Buffer)
sql: Scan error on column index 4, name "date_of_month": unsupported Scan, storing driver.Value type []uint8 into type *[]int
Golang cannot range over pointer to slice
You are iterating over a pointer to a slice, instead of a slice. This means you will have to first de-reference your variable and then loop over it. Check out this example.
cannot range over bytes.NewBuffer(s.DateOfMonth) (type *bytes.Buffer)
You cannot range over type *bytes.Buffer. You can instead access the bytes of the type by using the method Buffer.Bytes(). Check out this example.
sql: Scan error on column index 4, name "date_of_month": unsupported Scan, storing driver.Value type []uint8 into type *[]int
Judging by the error, I'm guessing this happens when you use type []int64 while scanning DateOfMonth. One possibility is that the database driver hands the value back as []uint8 (raw bytes) rather than []int64.
invalid sql type DateOfMonth (slice) for postgres
I'll try and update my answer after I am able to reproduce this error successfully.
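Separately, since the question's [] -> [91 93] example shows the column arriving as raw JSON text, decoding those bytes with encoding/json is one way to get real integers out (a stdlib-only sketch; decodeDays is a hypothetical helper, not part of the question's code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// decodeDays converts the raw bytes the driver returns for the json column
// (e.g. []byte{91, 54, 93}, i.e. "[6]") into the actual integer values.
func decodeDays(raw []byte) ([]int64, error) {
	var days []int64
	err := json.Unmarshal(raw, &days)
	return days, err
}

func main() {
	raw := []byte("[1,7,8]") // what arrives in DateOfMonth []uint8
	days, err := decodeDays(raw)
	if err != nil {
		panic(err)
	}
	// Now the comparison with today's day of month is straightforward.
	today := int64(time.Now().Day())
	for _, d := range days {
		if d == today {
			fmt.Println("matching", d, today)
		}
	}
	fmt.Println(days) // [1 7 8]
}
```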

Use of custom expression in LINQ leads to a query for each use

I have the following problem: In our database we record helpdesk tickets and we book hours under tickets. Between those is a visit report. So it is: ticket => visitreport => hours.
Hours have a certain 'kind' which is not determined by a type indicator in the hour record, but compiled by checking various properties of an hour. For example, an hour which has a customer but is not a service hour is always an invoice hour.
The last thing I want is for the definitions of those 'kinds' to roam all over the code; they must live in one place. Second, I want to be able to calculate totals of hours from various collections of hours. For example, a flattened collection of tickets with a certain date and a certain customer. Or all registrations which are marked as 'solution'.
I have decided to use a 'layered' database access approach. The same functions may provide data for screen representation but also for a report in .pdf . So the first step gathers all relevant data. That can be used for .pdf creation, but also for screen representation. In that case, it must be paged and ordered in a second step. That way I don't need separate queries which basically use the same data.
The amount of data may be large, like the creation of year totals. So the data from the first step should be queryable, not enumerable. To ensure I stay queryable even when I add the summation of hours in the results, I made the following function:
public static decimal TreeHours(this IEnumerable<Uren> h, FactHourType ht)
{
    IQueryable<Uren> hours = h.AsQueryable();
    ParameterExpression pe = Expression.Parameter(typeof(Uren), "Uren");
    Expression left = Expression.Property(pe, typeof(Uren).GetProperty("IsOsab"));
    Expression right = Expression.Constant(true, typeof(Boolean));
    Expression isOsab = Expression.Equal(Expression.Convert(left, typeof(Boolean)), Expression.Convert(right, typeof(Boolean)));
    left = Expression.Property(pe, typeof(Uren).GetProperty("IsKlant"));
    right = Expression.Constant(true, typeof(Boolean));
    Expression isCustomer = Expression.Equal(Expression.Convert(left, typeof(Boolean)), Expression.Convert(right, typeof(Boolean)));
    Expression notOsab;
    Expression notCustomer;
    Expression final;
    switch (ht)
    {
        case FactHourType.Invoice:
            notOsab = Expression.Not(isOsab);
            final = Expression.And(notOsab, isCustomer);
            break;
        case FactHourType.NotInvoice:
            notOsab = Expression.Not(isOsab);
            notCustomer = Expression.Not(isCustomer);
            final = Expression.And(notOsab, notCustomer);
            break;
        case FactHourType.OSAB:
            final = Expression.And(isOsab, isCustomer);
            break;
        case FactHourType.OsabInvoice:
            final = Expression.Equal(isCustomer, Expression.Constant(true, typeof(Boolean)));
            break;
        case FactHourType.Total:
            final = Expression.Constant(true, typeof(Boolean));
            break;
        default:
            throw new Exception("");
    }
    MethodCallExpression whereCallExpression = Expression.Call(
        typeof(Queryable),
        "Where",
        new Type[] { hours.ElementType },
        hours.Expression,
        Expression.Lambda<Func<Uren, bool>>(final, new ParameterExpression[] { pe })
    );
    IQueryable<Uren> result = hours.Provider.CreateQuery<Uren>(whereCallExpression);
    return result.Sum(u => u.Uren1);
}
The idea behind this function is that it should remain queryable so that I don't switch a shipload of data to enumerable.
I managed to stay queryable until the end. In step 1 I gather the raw data. In step 2 I order the data and subsequently I page it. In step 3 the data is converted to JSon and sent to the client. It totals hours by ticket.
The problem is: I get one query for the hours for each ticket. That's hundreds of queries! That's too much...
I tried the following approach:
DataLoadOptions options = new DataLoadOptions();
options.LoadWith<Ticket>(t => t.Bezoekrapport);
options.LoadWith<Bezoekrapport>(b => b.Urens);
dc.LoadOptions = options;
Bezoekrapport is simply Dutch for 'visitreport'. When I look at the query which retrieves the tickets, I see it joins the Bezoekrapport/visitreport but not the hours which are attached to it.
A second approach I have tried is manually joining the hours in LINQ, but that did not work either.
I must do something wrong. What is the best approach here?
The following code snippets are how I retrieve the data. Upon calling toList() on strHours in the last step, I get a hailstorm of queries. I've been trying for two days to work around it but it just doesn't work... Something must be wrong in my approach or in the function TreeHours.
Step 1:
IQueryable<RelationHoursTicketItem> HoursByTicket =
    from Ticket t in allTickets
    let RemarkSolved = t.TicketOpmerkings.SingleOrDefault(tr => tr.IsOplossing)
    let hours = t.Bezoekrapport.Urens.
        Where(h =>
            (dateFrom == null || h.Datum >= dateFrom)
            && (dateTo == null || h.Datum <= dateTo)
            && h.Uren1 > 0)
    select new RelationHoursTicketItem
    {
        Date = t.DatumCreatie,
        DateSolved = RemarkSolved == null ? (DateTime?)null : RemarkSolved.Datum,
        Ticket = t,
        Relatie = t.Relatie,
        HoursOsab = hours.TreeHours(FactHourType.OSAB),
        HoursInvoice = hours.TreeHours(FactHourType.Invoice),
        HoursNonInvoice = hours.TreeHours(FactHourType.NotInvoice),
        HoursOsabInvoice = hours.TreeHours(FactHourType.OsabInvoice),
        TicketNr = t.Id,
        TicketName = t.Titel,
        TicketCategorie = t.TicketCategorie,
        TicketPriority = t.TicketPrioriteit,
        TicketRemark = RemarkSolved
    };
Step 2
sort = sort ?? "TicketNr";
IQueryable<RelationHoursTicketItem> hoursByTicket = GetRelationHours(relation, dateFrom, dateTo, withBranches);
IOrderedQueryable<RelationHoursTicketItem> orderedResults;
if (dir == "ASC")
{
    orderedResults = hoursByTicket.OrderBy(sort);
}
else
{
    orderedResults = hoursByTicket.OrderByDescending(sort);
}
IEnumerable<RelationHoursTicketItem> pagedResults = orderedResults.Skip(start ?? 0).Take(limit ?? 25);
records = hoursByTicket.Count();
return pagedResults;
Step 3:
IEnumerable<RelationHoursTicketItem> hours = _hourReportService.GetRelationReportHours(relation, dateFrom, dateTo, metFilialen, start, limit, dir, sort, out records);
var strHours = hours.Select(h => new
{
    h.TicketNr,
    h.TicketName,
    RelationName = h.Relatie.Naam,
    h.Date,
    TicketPriority = h.TicketPriority.Naam,
    h.DateSolved,
    TicketCategorie = h.TicketCategorie == null ? "" : h.TicketCategorie.Naam,
    TicketRemark = h.TicketRemark == null ? "" : h.TicketRemark.Opmerking,
    h.HoursOsab,
    h.HoursInvoice,
    h.HoursNonInvoice,
    h.HoursOsabInvoice
});
I don't think your TreeHours extension method can be converted to SQL by LINQ in one go, so the calls are evaluated as each result row is constructed, causing four calls to the database per row in this case.
I would simplify your LINQ query to return the raw data from SQL, using a simple JOIN to get all tickets and their hours. I would then group and filter the hours by type in memory. Otherwise, if you really need to perform your operations in SQL, look at the CompiledQuery.Compile method, which should avoid making a query per row. I'm not sure you'd get the switch in there, but you may be able to convert it using the ?: operator.