NestJS TypeORM - Why does a unique constraint violation in an entity throw an exception and crash the app?

I have an entity described as
export class TeamEntity {
  @PrimaryGeneratedColumn('uuid')
  readonly id: string;

  @CreateDateColumn()
  readonly createdAt: Date;

  @UpdateDateColumn()
  updatedAt: Date;

  @Column({ unique: true })
  @Unique('Duplicate name', ['name'])
  name: string;
}
When I add a record to the database for the first time, the record is inserted as expected. When the same record is inserted again, the unique constraint is violated and, as expected, the record is not inserted.
The problem is that the whole app crashes when this constraint is violated, and I have to restart the app to get it back online.
A strange exception to this rule: when a console.log() is present anywhere in the execution path, the application does not crash; the correct error message is thrown and the app continues.
Is there a reason the app doesn't crash when there is a console.log() present? And if so, how do I stop the app from crashing when I remove this line?

You can use INSERT IGNORE (the orIgnore flag on the query builder) for this issue:
await getManager()
  .createQueryBuilder()
  .insert()
  .orIgnore(true)
  .into(TeamEntity)
  .values({ name: 'John' })
  .execute();
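For what it's worth, the crash-without-console.log symptom is consistent with an unhandled promise rejection, which terminates the Node process in recent versions. If you would rather surface the violation as a handled error than silently skip the insert, here is a minimal sketch of catching it in a NestJS service (the repository injection, service name, and import paths are assumptions, not code from the question):

import { ConflictException, Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { QueryFailedError, Repository } from 'typeorm';
import { TeamEntity } from './team.entity'; // path assumed

@Injectable()
export class TeamService {
  constructor(
    @InjectRepository(TeamEntity)
    private readonly teamRepository: Repository<TeamEntity>,
  ) {}

  async createTeam(name: string): Promise<TeamEntity> {
    try {
      // Await the insert so a unique-constraint failure is caught here
      // instead of becoming an unhandled rejection that kills the process.
      return await this.teamRepository.save({ name });
    } catch (err) {
      if (err instanceof QueryFailedError) {
        throw new ConflictException(`Team name '${name}' already exists`);
      }
      throw err;
    }
  }
}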


Sequelize Update/save has no effect on Table row

We have a MySql DB on AWS, and we are using Lambdas with CDK, built on Typescript.
We have many CRUD operations across various lambdas.
However, on one lambda, the update of a row on a Table is not working.
An integration test run on a local DB works fine, and the affected rows give expected results. But running on AWS for this use case, calling the exact method that was integration tested, simply has no effect. The operation returns 0 affected rows, and no ValidationError is thrown, despite the transaction having changed data fields. We have tried using both the sequelize update method, and the save method. One interesting aspect is that the update query works fine if we update an old field. For the brand new fields, the update does not work. The DB migrations for new fields were run identically for the local integration test DB and the AWS DB.
The team cannot work out the issue.
Any ideas?
Code example:
async function updateMyValuesInUserDb(myParam: string, user: UserDbModel): Promise<void> {
  try {
    const value1 = getValue1(myParam)
    const value2 = getValue2(myParam)
    user.myNewField1 = value1
    user.myNewField2 = value2
    await user.save()
  } catch (err) {
    // error handling omitted in the original snippet
  }
}
Model:
@Table({
  tableName: 'User',
  freezeTableName: true
})
export class UserDbModel extends Model {
  @PrimaryKey
  @AutoIncrement
  @Unique
  @Column
  id!: number

  @Column
  username: string

  @Column
  firstName: string

  @Column
  lastName: string

  @IsEmail
  @Unique
  @Column
  email: string

  @Default(false)
  @Column
  myNewField1: boolean

  @AllowNull(true)
  @Default(null)
  @Column
  myNewField2: number

  @UpdatedAt
  @Column
  updatedAt: Date
}
I suspect that the getValue1() and getValue2() functions are asynchronous calls and need the await keyword applied so that the runtime waits for the Promises to resolve.
The issue is that although the integration tests were using the latest User model, the code deployed to AWS was using an outdated User model, because a package in package.json was out of date. We use symlinks to connect packages, and the deploy from local should factor in the symlink. Unfortunately, the symlink only worked for the integration test. We could have discovered this earlier if we had opened up the deployed/built code and searched for the new fields.
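For anyone debugging something similar: a quick runtime check of which attributes the deployed model actually carries can expose a stale build immediately. A minimal sketch, assuming Sequelize v6 and a hypothetical import path:

import { UserDbModel } from './models/user.db.model'; // hypothetical path

// Log the attributes compiled into this deployment. If myNewField1 and
// myNewField2 are missing from the list, the lambda bundled an old model.
console.log('User model attributes:', Object.keys(UserDbModel.rawAttributes));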

Object modeling and JSON responses

I'm trying to implement a web application with a framework that lets you implement both the API and the user interface (NextJS), and I'm not sure what the best practices for modeling and JSON responses are.
Considerations
In the database the models are stored using foreign keys to reference other objects. There are many-to-many, one-to-many and one-to-one relationships.
Currently using a programming language that allows interface declaration (TypeScript).
The interface declarations can be accessed from both front and back ends.
When parsing a model that has a foreign key from the database in the backend (API), I use an instance of an interface that declares the foreign key as a string.
To the front-end I want to return JSON with nested objects. That is, instead of returning the foreign-key reference, I want to embed the already-retrieved object from the database.
The problem
If I want to achieve this pattern, I'm forced to declare two interfaces: one with a simple shallow reference (the foreign key) and another that allows an embedded object.
Example:
Say I have a Blog Post model: Post:
interface Post {
  id: string;
  creatorId: string;
}
And another User model: User:
interface User {
  id: string;
  name: string;
}
So for parsing from the database I'm using the Post interface which represents exactly how it's stored in the database.
But if I want to return the nested object to the frontend, I should rely on a model similar to:
interface PostWithNested {
  id: string;
  creator: User;
}
where the JSON of this interface will be something like:
{
  "id": "X",
  "creator": {
    "id": "Y",
    "name": "Z"
  }
}
Example of the API where the parsing from the database must be done using an interface:
...
const postsRepository = getRepository(Post); // has shallow reference
const singlePost: Post = await postsRepository.findById(X);
// Adding the nested object (userInstance retrieved separately)
const postWithNested: PostWithNested = {
  id: singlePost.id,
  creator: userInstance,
};
res.status(200).json({ post: postWithNested });
So is there a workaround for not declaring two interfaces that are essentially equal but differ only in the reference to the related object?
You can create an interface Post with an optional creator. For example:
interface User {
  id: string;
  name: string;
}

interface Post {
  id: string;
  creatorId: string;
  creator?: User; // it will not be populated if there is no data
}
Whenever you receive creator data from the API, it will be populated. Otherwise, it simply stays unset in your Post data.
Here is how to integrate it into your code:
const postsRepository = getRepository(Post);
const singlePost: Post = await postsRepository.findById(X);
res.status(200).json({ post: singlePost });
Similarly, if you have a one-to-many relationship (for example, one user has many posts), you will have this interface structure:
interface User {
  id: string;
  name: string;
  posts?: Post[]; // can hold the many posts belonging to this user
}

interface Post {
  id: string;
  creatorId: string;
  creator?: User;
}
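To actually return the nested JSON you still populate the optional field yourself. A minimal sketch of a handler doing that, reusing the question's getRepository/findById calls plus a hypothetical usersRepository:

const postsRepository = getRepository(Post);
const usersRepository = getRepository(User); // hypothetical user repository

async function getPostWithCreator(postId: string): Promise<Post> {
  const post: Post = await postsRepository.findById(postId);
  const creator: User = await usersRepository.findById(post.creatorId);
  // The shallow and the nested shape both satisfy the same Post interface,
  // because creator is optional.
  return { ...post, creator };
}

res.status(200).json({ post: await getPostWithCreator(X) });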

Gracefully closing connection of DB using TypeORM in NestJs

So, before I go deep into the problem, let me explain the basics of my app.
I have connections to the DB (TypeORM) and to Kafka (kafkajs) in my app.
My app is the consumer of one topic, and it:
Gets some data in the callback handler and puts that data into a table using a TypeORM entity
Maintains a global map (in a singleton instance of a class) keyed by an id (which I get in the data from point 1).
When the app is shutting down, my task is to:
Disconnect all the consumers of the topics (that this service is connected to) from Kafka
Traverse the global map (point 2) and repark the messages in some topic
Disconnect the DB connections using the close method.
Here are some pieces of code that might help you understand how I added the lifecycle events on the server in NestJS.
system.server.life.cycle.events.ts
@Injectable()
export class SystemServerLifeCycleEventsShared implements BeforeApplicationShutdown {
  constructor(@Inject(WINSTON_MODULE_PROVIDER) private readonly logger: Logger, private readonly someService: SomeService) {}

  async beforeApplicationShutdown(signal: string) {
    const [err] = await this.someService.handleAbruptEnding();
    if (err) this.logger.info(`beforeApplicationShutdown, error::: ${JSON.stringify(err)}`);
    this.logger.info(`beforeApplicationShutdown, signal ${signal}`);
  }
}
some.service.ts
export class SomeService {
  constructor(private readonly kafkaConnector: KafkaConnector, private readonly postgresConnector: PostgresConnector) {}

  public async handleAbruptEnding(): Promise<any> {
    await this.kafkaConnector.disconnectAllConsumers();
    for (READ_FROM_GLOBAL_STORE) { // pseudocode: iterate the global map
      await this.kafkaConnector.function.call.to.repark.the.message();
    }
    await this.postgresConnector.disconnectAllConnections();
    return true;
  }
}
postgres.connector.ts
export class PostgresConnector {
  private connectionManager: ConnectionManager;

  constructor() {
    this.connectionManager = getConnectionManager();
  }

  public async disconnectAllConnections(): Promise<void[]> {
    const connectionClosePromises: Promise<void>[] = [];
    this.connectionManager.connections?.forEach((connection) => {
      if (connection.isConnected) connectionClosePromises.push(connection.close());
    });
    return Promise.all(connectionClosePromises);
  }
}
ConnectionManager and getConnectionManager() are imported from the TypeORM module.
Now here are some unusual exceptions / behavior I am facing:
disconnectAllConnections() is throwing the following error:
ERROR [TypeOrmModule] Cannot execute operation on "default" connection because connection is not yet established.
If the connection is not yet established, how come isConnected was true inside the if? I can't find any clue as to how this is possible, nor how to do a graceful shutdown of a connection in TypeORM.
Do we really need to handle the closure of the connection in TypeORM, or does it handle it internally?
Even if TypeORM handles the connection closure internally, how can we trigger it explicitly?
Is there any callback that fires when the connection is disconnected properly, so that I can be sure the disconnection from the DB actually happened?
Some of the messages arrive after I press CTRL + C (mimicking an abrupt closure of my server process) and control has returned to the terminal. This means some work is still running after the handler returns (🤷, no clue how I would handle this, since my handleAbruptEnding is awaited and I cross-checked that all the promises are properly awaited).
Some things to know:
I properly added my module to create the hooks for the server lifecycle events.
Injected the objects in almost all the classes properly.
Not getting any DI issues from Nest, and the server starts properly.
Please shed some light and let me know how I can gracefully disconnect from the DB using the TypeORM API inside NestJS in case of an abrupt closure.
Thanks in advance and happy coding :)
A little late, but this may help someone.
You are missing the param keepConnectionAlive as true in TypeOrmModuleOptions; TypeORM does not keep connections alive by default. I set keepConnectionAlive to false, and if a transaction keeps the connection open I close the connection myself (TypeORM waits until the transaction or other process finishes before closing the connection). This is my implementation:
import { Logger, Injectable, OnApplicationShutdown } from '@nestjs/common';
import { getConnectionManager } from 'typeorm';

@Injectable()
export class LifecyclesService implements OnApplicationShutdown {
  private readonly logger = new Logger();

  onApplicationShutdown(signal: string) {
    this.logger.warn('SIGTERM: ', signal);
    this.closeDBConnection();
  }

  closeDBConnection() {
    const conn = getConnectionManager().get();
    if (conn.isConnected) {
      conn
        .close()
        .then(() => {
          this.logger.log('DB conn closed');
        })
        .catch((err: any) => {
          this.logger.error('Error closing conn to DB, ', err);
        });
    } else {
      this.logger.log('DB conn already closed.');
    }
  }
}
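For reference, keepConnectionAlive belongs in the TypeOrmModule options; a minimal sketch with assumed connection details:

import { TypeOrmModule } from '@nestjs/typeorm';

TypeOrmModule.forRoot({
  type: 'postgres', // connection details here are assumptions
  host: 'localhost',
  port: 5432,
  username: 'app',
  password: 'secret',
  database: 'app',
  // false (the default) lets Nest close the connection on shutdown;
  // true keeps it alive across module re-initialisation.
  keepConnectionAlive: false,
});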
I discovered some TypeORM docs saying "Disconnection (closing all connections in the pool) is made when close is called"
Here: https://typeorm.biunav.com/en/connection.html#what-is-connection
I tried export const AppDataSource = new DataSource({ /* details */ }), importing it, and doing:
import { AppDataSource } from "../../src/db/data-source";

function closeConnection() {
  console.log("Closing connection to db");
  // AppDataSource.close(); // said "deprecated - use destroy() instead"
  AppDataSource.destroy(); // hence I did this
}

export default closeConnection;
Maybe this will save someone some time
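One caveat that applies to the answers above: NestJS only fires onApplicationShutdown and beforeApplicationShutdown if shutdown hooks are enabled on the application. A minimal main.ts sketch (module path assumed):

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module'; // assumed module path

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Without this call, SIGINT/SIGTERM will not trigger the shutdown
  // lifecycle hooks, and the connection-closing code above never runs.
  app.enableShutdownHooks();
  await app.listen(3000);
}
bootstrap();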

Proper way to define the date field of my typescript interface that will be loaded from my cloud function?

Background: I'm using Vue and am interacting with cloud functions to retrieve data from Firestore (I'm not querying Firestore directly). I'm calling a function where one of the fields returns the internal representation of Firestore's Timestamp (seconds, nanoseconds). Also, I'm totally new to TypeScript/Vue, so apologies if I'm missing something obvious.
I defined a model like so:
<script lang="ts">
interface Order {
  order_id: number
  uid: string
  create_date: Date // { _seconds: number; _nanoseconds: number } ... from Firestore timestamp
}

export default Vue.extend({
  data() {
    return {
      userOrderInfo: [] as Order[],
    }
  },
  methods: {
    getUserOrders() {
      userOrders({ uid: this.$store.state.uid })
        .then(result => {
          const orderList: Order[] = JSON.parse(JSON.stringify(result.data))
          const filteredOrders: Order[] = []
          orderList.forEach(o => {
            filteredOrders.push(o)
            // recreate the Timestamp, then convert it to a Date
            const tempCreateDate = new firebase.firestore.Timestamp(
              o.create_date._seconds,
              o.create_date._nanoseconds
            )
            o.create_date = tempCreateDate.toDate()
          })
        })
    },
  },
})
I'm having a problem with how I'm parsing and storing the create_date.
It seems ugly to me and I feel like there's probably a more elegant way to get it. It seems crude to have to get the nanoseconds and seconds and then recreate the Timestamp to call toDate().
VSCode is giving me this error, but everything seems to run fine.
"Property '_seconds' does not exist on type 'Date'.Vetur(2339)"
It doesn't like how I defined _seconds and _nanoseconds in the data() definition.
Is there a better way to define my date and retrieve it?
o.create_date = tempCreateDate.toDate() is giving me this warning:
Identifier 'create_date' is not in camel case. eslint(@typescript-eslint/camelcase)
The problem is the cloud function is returning create_date in this format, so I can't make it camel case.
Is there any solution for this besides enduring this warning?
I'll just address the TypeScript part, because I haven't used Firebase and don't know how you could refactor the result from Firebase.
If your create_date has the following structure:
{
  _seconds: number;
  _nanoseconds: number;
}
you can define that as an interface.
<script lang="ts">
interface Order {
  order_id: number;
  uid: string;
  create_date: ResponseDate;
}

// please choose a more fitting name :)
interface ResponseDate {
  _seconds: number;
  _nanoseconds: number;
}
This should get rid of your VSCode TypeScript error, because TypeScript now properly understands what create_date is.
Now if you want to later update o.create_date to an actual Date, you need to add that to your interface-definition.
<script lang="ts">
interface Order {
  order_id: number;
  uid: string;
  create_date: ResponseDate | Date; // can be a ResponseDate or a Date
}

interface ResponseDate {
  _seconds: number;
  _nanoseconds: number;
}
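Since create_date is now a union, TypeScript will want it narrowed before you use Date-specific members; a small type-guard sketch, reusing the interfaces above:

// True while create_date still holds the raw Firestore representation,
// false once it has been converted to a Date.
function isResponseDate(value: ResponseDate | Date): value is ResponseDate {
  return (value as ResponseDate)._seconds !== undefined;
}

// Usage sketch:
// if (isResponseDate(o.create_date)) {
//   o.create_date = new firebase.firestore.Timestamp(
//     o.create_date._seconds,
//     o.create_date._nanoseconds
//   ).toDate();
// }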

Grails Scaffold - Generated MySQL Column Type for JodaTime DateTime field

I am new to Grails and am using Grails 2.1 with a MySQL 5.5 backend to build a sample project to learn.
I installed the JodaTime 1.4 plugin and then ran grails install-joda-time-templates.
However, when I declared a Domain Class field to be of type org.joda.time.DateTime, I got an error when attempting to save a new entry.
In order to isolate the problem, I created a simple Domain Class:
import org.joda.time.DateTime

class Project {
  String name
  DateTime startDate

  static constraints = {
    name(blank: false, maxSize: 50)
    startDate(validator: { return (it > new DateTime()) })
  }
}
The controller just sets scaffold to use the Domain Class.
My DataSource.groovy specifies dbCreate = "create-drop", as I am letting the tables get created by Grails.
Here is the error I get when I try to save:
Class: com.mysql.jdbc.MysqlDataTruncation
Message: Data truncation: Data too long for column 'start_date' at row 1
When I look at project.start_date column in the MySQL database that Grails created, the type is TINYBLOB.
My thought is that TINYBLOB may not be sufficient to store the data for a JodaTime DateTime field.
Does anyone know how I can make Grails create an appropriate type?
Thank you very much.
In your Config.groovy:
grails.gorm.default.mapping = {
  "user-type" type: PersistentDateTime, class: DateTime
  "user-type" type: PersistentLocalDate, class: LocalDate
}
And your mapping closure:
static mapping = {
  startDate type: PersistentDateTime
}
Take a look at this post for more info, see if it helps.
What I did to make it work (Grails 2.1):
1) Add to BuildConfig.groovy:
compile "org.jadira.usertype:usertype.jodatime:1.9"
2) Refresh the dependencies.
3) Run this command to add the user-type support:
grails install-joda-time-gorm-mappings
4) Finally, in the domain class:
import org.jadira.usertype.dateandtime.joda.*

static mapping = {
  column_name type: PersistentDateTime
}
Documentation was found here: persistence