How to do two joins and execute a single query using ActiveRecord? - mysql

I'm using MySQL and I'm trying to create a couple of scopes to find a User based on a set of conversations. Users and conversations have a polymorphic association that lets a user author and receive conversations.
The with_conversations scope does what I want, but it isn't very efficient: instead of executing a single query, it combines two arrays with |, which triggers two separate queries.
class User
  scope :with_received_conversations, ->(conversations) { joins(:received_conversations).where(conversations: { id: conversations, receiver_type: "User" }) }
  scope :with_authored_conversations, ->(conversations) { joins(:authored_conversations).where(conversations: { id: conversations, author_type: "User" }) }
  scope :with_conversations, ->(conversations) { with_authored_conversations(conversations) | with_received_conversations(conversations) }
end
I attempted to use ActiveRecord's or method, but it raises the following error:
*** ArgumentError Exception: Relation passed to #or must be structurally compatible. Incompatible values: [:joins]

I'm not allowed to comment, so sorry for posting this as an answer. I think this post is helpful:
Relation passed to #or must be structurally compatible. Incompatible values: [:references]
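For reference, one way to end up with a single query (an untested sketch based on the scopes above): keep the joins inside id subqueries, so both relations passed to #or have the same structure.
class User
  # Untested sketch: both sides of #or are plain where-relations on users, so
  # they are structurally compatible; the joins only happen inside the id
  # subqueries, and the lookup runs as one SQL statement with subselects.
  scope :with_conversations, ->(conversations) {
    where(id: with_authored_conversations(conversations).select(:id))
      .or(User.where(id: with_received_conversations(conversations).select(:id)))
  }
end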

How to fix Slick Exception of single AutoInc column returned on INSERT

I tried to implement the akka-http rest example provided at
https://github.com/ArchDev/akka-http-rest
but I'm stuck with this exception:
slick.SlickException: This DBMS allows only a single column to be returned from an INSERT, and that column must be an AutoInc column.
at slick.jdbc.JdbcStatementBuilderComponent$JdbcCompiledInsert.buildReturnColumns(JdbcStatementBuilderComponent.scala:67)
Here is the Scala Code:
Signup API:
path("signUp") {
pathEndOrSingleSlash {
post {
entity(as[UsernamePasswordEmail]) { userEntity =>
complete(Created -> signUp(userEntity.username, userEntity.email, userEntity.password))
}
}
}
}
AuthService.scala
def signUp(login: String, email: String, password: String): Future[AuthToken] =
  authDataStorage
    .saveAuthData(AuthData(UUID.randomUUID().toString, login, email, password.sha256.hex))
    .map(authData => encodeToken(authData.id))
AuthDataStorage.scala
...
override def saveAuthData(authData: AuthData): Future[AuthData] =
  db.run((auth returning auth).insertOrUpdate(authData)).map(_ => authData)
...
Since I'm new to Scala and Slick, can anyone explain why this exception occurs even though I've defined O.AutoInc in the model? I'm using MySQL.
The problem is with returning auth. Instead of returning auth, i.e. the complete row, return just the auto-increment column id. Slick does not support returning the complete object here; it compiles, but it does not generate a valid SQL query.
Once you have access to the auto-increment id, you can build the AuthData from the function's argument.
Code:
(auth returning auth.map(_.id)).insertOrUpdate(authData).map(idOpt => authData.copy(id = idOpt.getOrElse(authData.id)))
The exception results from a MySQL limitation. As the Slick documentation states:
Many database systems only allow a single column to be returned which must be the table’s auto-incrementing primary key. If you ask for other columns a SlickException is thrown at runtime (unless the database actually supports it).
Change the saveAuthData method to return the id column on an upsert:
override def saveAuthData(authData: AuthData): Future[AuthData] =
  db.run((auth returning auth.map(_.id)).insertOrUpdate(authData))
    .map(idFromDb => authData.copy(id = idFromDb.getOrElse(authData.id)))
In the above code, idFromDb is a Some[Int] for an insert and a None for an update.

How to remove a row from a join table?

I have a table called LocalEventSessions. It has 2 columns, localevent_id and session_id.
In my LocalEventSessionRepository I have this code:
List<LocalEventSession> findAllByLocalEventSessionId_localEventId(Integer id);
This returns a list of all sessions belonging to a LocalEvent. This is used to display the sessions.
I would like to remove a session relation to a localevent. So I wrote this:
Integer deleteByLocalEventIdAndSessionId(@Param("localevent_id") Integer lid, @Param("session_id") Integer sid);
I expect this to delete the record in the LocalEventSession table matching the given localevent_id and session_id. The code compiles without problems, but when I deploy it with WildFly I receive several errors. The end of the stack trace shows that WildFly is unable to locate the localEventId attribute.
Caused by: java.lang.IllegalArgumentException: Unable to locate Attribute with the the given name [localEventId] on this ManagedType [net.atos.eventsite.jar.LocalEventSession]"},
"WFLYCTL0412: Required services that are not installed:" => ["jboss.undertow.deployment.default-server.default-host./beheerback"],
"WFLYCTL0180: Services with missing/unavailable dependencies" => undefined
If I deploy the code without the deleteBy method declaration it compiles and deploys without any problem.
Following JB Nizet's advice, I created an ArrayList with all the LocalEventSession objects (their localEventIds and sessionIds). Then, with a for loop, I search for the correct object like so:
for (LocalEventSession j : deleteItem) {
    // compare with equals(): these are Integer objects, not primitives
    if (j.getSessionId().equals(sessionId)) {
        localEventSessionRepository.delete(j);
    }
}
And then use the delete() method from Spring to remove the record from the table.
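A derived delete query can also work if the method name follows the entity's actual property path. The sketch below is untested and assumes LocalEventSession has an embedded LocalEventSessionId key, as the findAll method above suggests; adjust the names to the real mapping.
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;

// Untested sketch: the attribute WildFly could not find is localEventId on
// LocalEventSession itself; with an embedded id, the derived query has to go
// through localEventSessionId first, mirroring the finder above. The id type
// LocalEventSessionId is assumed, not taken from the original post.
public interface LocalEventSessionRepository
        extends JpaRepository<LocalEventSession, LocalEventSessionId> {

    List<LocalEventSession> findAllByLocalEventSessionId_localEventId(Integer id);

    @Transactional
    Integer deleteByLocalEventSessionId_localEventIdAndLocalEventSessionId_sessionId(
            Integer localEventId, Integer sessionId);
}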

Thinking Sphinx - multiple field conditions within scope

Using thinking-sphinx 3.2.0.
I have scopes chained conditionally and would like to trigger .search_for_ids after the chain is defined. Therefore, I would like to use a sphinx_scope to define conditions on multiple fields.
sphinx_scope(:for_query) do |query|
  {
    conditions: { title: query, description: query }
  }
end
This results in the following SphinxQL (excerpt):
WHERE MATCH('#title string #description string')
But I would like it to result in
WHERE MATCH('#title string | #description string')
Is this possible within a scope? Or should I give up on scope chaining and pass the conditions as a literal string param to .search?
Thanks!
OK, I seem to have found the answer myself.
sphinx_scope(:for_query) do |query|
  {
    conditions: { "(title,description)" => query }
  }
end
which results in
WHERE MATCH('#(title,description) string')
Thanks Pat and contributors for a great gem!
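For completeness, a hypothetical usage of the scope, chaining it and then fetching only ids as the question set out to do (the Article model name is assumed; it is not in the original post):
# Hypothetical model name; the scope merges its conditions into the search,
# and search_for_ids returns just the matching ids.
Article.for_query("some string").search_for_ids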

Doctrine2 / Zend1

I have an application written in Zend 1 / Doctrine 2 (2.1.5). It is important to note several points about the application:
I have an entity "group" which can have several types:
manual group
pending group
validated group
Each type corresponds to a group class inheriting from an abstract parent class.
My groups are organized as a tree (graph theory).
I therefore need a ManyToMany relation, and thus an intermediate SQL table.
However, I need specific attributes on this relation.
So I have to model the ManyToMany relation by hand: I create the intermediate entity myself. Its name is "Composition".
For the "pending group" type, as its name indicates, a user has to validate the group before it becomes effective.
A group automatically becomes pending validation if it consists of at least 5 groups.
While a group is awaiting validation, it can be either accepted or refused.
If a pending group is refused, it reverts to its previous version, if one exists.
If a pending group is accepted, it becomes effective and can be used.
Here is my problem: when a group becomes pending validation, for example after another group is added to it, the application does not preserve the old version of the group.
Indeed, the newly added group is attached both to the pending group and to the group of the previous type.
Here is my code:
public function putAction() {
    // it's not a pending group.
    $pendingGroup = $this->toPendingGroup($object);
    $this->em->persist($pendingGroup);
    $this->em->flush();
    // send mail
}
public function toPendingGroup($groupToValidate)
{
    $newGroup = new Model_Group_Manual_Pending();
    $oldGroup = Serializer_Arrays::serialize($groupToValidate);
    unset($oldGroup['components']);
    Serializer_Arrays::unserialize($oldGroup, 'Model_Group_Manual_Pending', $newGroup);
    $newGroup->setSupport(Gromit_Di::getAcl()->getAuthUser());
    $compositions = array();
    foreach ($groupToValidate->getComponents() as $composition)
    {
        if ($composition->getId() == NULL) {
            $newComposition = new Model_Composition();
            // initialise my composition
            $compositions[] = $newComposition;
        }
    }
    $newGroup->setComponents($compositions);
    return $newGroup;
}
---
To summarize, here is a diagram of the situation:
(image: example)
- In black, the previous version
- In red, the new version
- In blue, the unwanted link
Thank you for your help

Grails - MySQL query result with a zero index causing GORM runtime exception

I'm using an existing DB from an open source application (OSTicket). There are two tables (OstStaff and OstTicket, where staff hasMany tickets). The big problem is that OSTicket assigns a default value of zero to OstTicket.staff_id when the ticket is not yet assigned to an OstStaff.
The issue arises when you search for tickets and GORM thinks there is a staff record with id 0. I can't even check for null; I just keep getting this error:
No row with the given identifier exists: [com.facilities.model.OstFacStaff#0].
Stacktrace follows:
Message: No row with the given identifier exists: [com.facilities.model.OstFacStaff#0]
Any suggestion how I can get around this issue? Thanks.
Solved this by getting just the ids, rather than the records, and then removing the zero id.
def ticketCriteria = OstFacTicket.createCriteria()
def staffIdsWithTickets = ticketCriteria.list {
    projections {
        distinct("staff.id") // instead of returning staff, get the id
    }
    between("created", start, end)
}
def zeroIndex = staffIdsWithTickets.indexOf(0)
if (zeroIndex >= 0) {
    staffIdsWithTickets.remove(zeroIndex)
}
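A possible follow-up, not part of the original post, is to load the staff rows from the cleaned-up id list:
// Hypothetical follow-up: fetch the staff records for the remaining ids.
// getAll is GORM's bulk lookup by id; an empty list is handled explicitly.
def staffWithTickets = staffIdsWithTickets ? OstFacStaff.getAll(staffIdsWithTickets) : []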