Good afternoon. I am creating an application in Ionic using Angular that currently handles two connection modes: online if the user has internet access and offline if they do not.
Currently I have a feature where the user can sync data and apply it in SQLite,
with a JSON like this (to synchronize):
{
countries: [/* 200+ records */],
vaccine: [/* 3000+ records */],
... other 20 keys
}
and the SQLite structure (mobile) has one table per key, like this:
CREATE TABLE IF NOT EXISTS countries(
id INTEGER PRIMARY KEY,
code TEXT NOT NULL,
name TEXT NULL,
-- other columns
);
CREATE TABLE IF NOT EXISTS vaccine(
id INTEGER PRIMARY KEY,
name TEXT NULL,
-- other columns
);
How can I execute the synchronization process without executing one SQL statement per loop iteration, while still making sure that if the row already exists, it is updated?
public descargar(clave): void {
  this.descargarParametricas().subscribe(
    parametros => {
      // process each key
      Object.keys(parametros).forEach(
        key => this.descargarParametrica(key, parametros[key])
      );
    }
  );
}
public descargarParametrica(key: string, parametros: any[]): void {
  parametros.forEach(item => {
    const headers = Object.keys(item);
    const rows = Object.values(item).map(value => value === '' ? 'NULL' : value);
    const sqlQuery = `INSERT INTO ${key}(${headers.join()}) VALUES (${rows.join()});`;
    // executing one query per loop iteration is too slow: the countries data
    // alone (200 records) hurts performance and stalls the app
    this.datalayer.execute(sqlQuery);
  });
}
this.descargarParametricas() returns a JSON of about 40 KB in size. The problem is that the insertion process must be done per key of the JSON, updating the row if it already exists; right now there is a performance issue because the insert/update process executes row by row.
Thanks for all your help!
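One way to avoid a round trip per row (a sketch, not a definitive implementation; buildUpsertSql and the quoting helper are invented for illustration) is to build a single multi-row INSERT OR REPLACE statement per key, so each table is written with one statement instead of one per record:

```javascript
// Minimal quoting helper, for illustration only -- in a real app prefer the
// parameter binding your SQLite plugin offers.
function escapeValue(value) {
  if (value === '' || value === null || value === undefined) return 'NULL';
  if (typeof value === 'number') return String(value);
  return `'${String(value).replace(/'/g, "''")}'`;
}

// Build one multi-row statement per key. INSERT OR REPLACE updates the row
// when the PRIMARY KEY already exists instead of failing.
function buildUpsertSql(key, records) {
  const headers = Object.keys(records[0]);
  const tuples = records.map(
    item => `(${headers.map(h => escapeValue(item[h])).join(',')})`
  );
  return `INSERT OR REPLACE INTO ${key}(${headers.join(',')}) VALUES ${tuples.join(',')};`;
}
```

Executing one such statement per key (ideally all inside a single transaction, e.g. via sqlBatch if you are on the Cordova SQLite plugin) reduces the work from one query per record to one query per table. Note that INSERT OR REPLACE deletes and re-inserts the conflicting row, so any columns absent from the JSON are reset to their defaults.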
I have 4 models/tables called Data, Metadata, MetadataOption and DataBank.
About Data
Data model is simply a model that stores a "value", and belongs to a certain DataBank:
datas:
id - int
databank_id - int
value - varchar
(timestamps, etc)
About DataBank
DataBank simply contains all the Datas from a certain source.
databanks:
id - int
source_id - int
(timestamps, etc)
About Metadata
The Metadata model stores all the Data-related attributes, in simple form. These could be anything, e.g. the year the data was taken, the data owner's name, etc.
metadatas:
id - int
name - varchar
type - shortInt/enum
(timestamps, etc)
Metadatas are divided into 2 types: those with fixed values (like enums) and non-fixed ones. An example of a metadata with fixed values is "Month", which consists of 12 month names: "January"..."December". These fixed values are stored in metadata_options, which means metadata HAS MANY metadata_options.
metadata_options:
id - int
metadata_id - int
label - varchar
(timestamps, etc)
The values of the Metadatas of a certain Data are stored in a pivot table data_metadata, in either the metadata_value or the metadata_option_id column. Both columns are nullable, but each record is guaranteed to have a non-null value in one of them.
data_metadata:
data_id - int
metadata_id - int
metadata_value - varchar
metadata_option_id - int
That's all about the models/tables. Now...
The Problem
I'm trying to filter Datas (taken from a certain DataBank) by their Metadata values.
After some reading on Laravel official docs and some online sources, I've come with a solution like this:
Data.php
public function databank()
{
return $this->belongsTo(DataBank::class);
}
public function metadatas()
{
return $this->belongsToMany(Metadata::class, 'data_metadata')->withPivot('metadata_value', 'metadata_option_id');
}
DataBank.php
public function datas()
{
return $this->hasMany(Data::class);
}
public function datasFiltered($metaValuePairs = []): Builder|HasMany
{
$qq = $this->datas();
if (! empty($metaValuePairs)) {
foreach ($metaValuePairs as $metaId => $metaVal) {
$qq = $qq->whereMetadataValue($metaId, $metaVal);
}
}
return $qq;
}
DataBuilder.php (custom builder for Data model)
public function whereMetadataValue($metaId, $metaVal): Builder
{
return $this->whereHas('metadatas', function ($query) use ($metaId, $metaVal) {
return $query
->where(
function ($query) use ($metaVal) {
return $query
->where(
function ($query) use ($metaVal) {
return $query->whereIn('metadatas.type', [
MetadataTypeEnum::Ordinal,
MetadataTypeEnum::Nominal
])->where('data_metadata.metadata_option_id', $metaVal);
}
)
->orWhere('data_metadata.metadata_value', $metaVal);
}
)
->where('metadatas.id', $metaId);
});
}
===
So, using the DataBank's datasFiltered() with this input:
// let's say a "Year" metadata has an id of 1,
// and "State Name" metadata has an id of 2
$filters = [
    1 => '2022',
    2 => 'Ohio'
];
I expect the returned Datas to contain only data that was recorded in 2022 AND was taken from the state of Ohio. But instead, I get an empty result...
Thanks in advance!
I'm trying to use DbSession to track users' activity, and I have everything set up and running according to the Yii documentation. However, when a user loads a page, multiple session records are saved in the database in a single request. The image below shows the data in the database. What is the cause of this, and is there any solution to fix it?
In my config file I have this:
'session' => [
// this is the name of the session cookie used for login on the frontend
//'name' => 'advanced-frontend',
'class' => 'yii\web\DbSession',
'writeCallback' => function ($session) {
return [
'user_id' => \Yii::$app->user->id,
'ip' => \Yii::$app->clientip->get_ip_address(),
];
},
],
The first column (id) is the primary key and should be unique (it is declared that way in the migration). You have probably messed something up with the table schema; you should not be able to save 3 records with the same id. DbSession uses upsert() and relies on the uniqueness of the id column.
Make sure the id column is the primary key, or at least has a UNIQUE constraint.
I'm scraping a shoutbox which is limited to 10 messages; it's asynchronous, and when the 11th item appears the first one is gone.
I set up Puppeteer, and it scrapes the structure correctly as an array, which I dump to MongoDB. The easiest way of automating this I came up with is running the script with the watch command and a static interval.
The question is how to skip duplicate items in the log; the items don't need to be unique, I just don't want to dump the same one twice. And there's probably a better way to cycle this process. Screenshot attached.
You can use db.collection.distinct() in MongoDB to obtain the distinct messages from your database:
db.messages.distinct( 'message' );
Alternatively, you can use db.collection.createIndex() to create a unique index in your database so that the collection will not accept insertion or update of a document where the index key value matches an existing value in the index:
db.messages.createIndex( { 'message' : 1 }, { 'unique' : true } );
In your Puppeteer script, you can use page.evaluate() in conjunction with the Set object to obtain distinct messages from the web page that you are scraping (note the spread into an array: page.evaluate() serializes its return value, and a bare Set would not survive the round trip):
const distinct_messages = await page.evaluate( () => [ ...new Set( Array.from( document.querySelectorAll( '.message' ), e => e.textContent ) ) ] );
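If you control the polling loop from the script itself (e.g. with setInterval instead of the external watch command), you can also deduplicate in memory before touching the database. A minimal sketch, assuming the scrape yields an array of message strings (the names here are illustrative):

```javascript
// Remember what was already dumped during this run.
const seen = new Set();

// Given the freshly scraped array, return only the messages not seen before
// and mark them as seen; only these need to be written to MongoDB.
function newMessages(scraped) {
  const fresh = scraped.filter(msg => !seen.has(msg));
  fresh.forEach(msg => seen.add(msg));
  return fresh;
}
```

This skips anything already dumped during the current run; combined with the unique index above, duplicates are also rejected across restarts. Be aware that two genuinely identical messages posted at different times would be treated as duplicates by both approaches.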
If I create an entry in a database such as this (cvm.Casefile has all the info needed to create the casefile):
Casefile casefile = cvm.Casefile;
casefile.ClientId = cvm.Client.ClientId;
casefile.DateSubmitted = DateTime.Now;
db.Casefiles.Add(casefile);
db.SaveChanges();
Immediately after the save call, I try to retrieve this entry's ID number from the database with:
int casefileId = db.Casefiles.Where(u => u.UserProfileId == casefile.UserProfileId)
.Where(c => c.ClientId == casefile.ClientId)
.Single(d => d.DateSubmitted == casefile.DateSubmitted).CasefileId;
This returns null when it is executed. I've stepped through the program, and all the casefile values are populated and the database has the required row inserted with a valid ID. Is there an easier way to get the ID from the database, or where did I screw up the call to the database?
I think you can just do
var id = casefile.CasefileId;
after SaveChanges(); Entity Framework populates the database-generated key on the entity once the insert has run.
Is there a performant way to fetch the list of foreign keys assigned to a MySQL table?
Querying the information schema with
SELECT
`column_name`,
`referenced_table_schema` AS foreign_db,
`referenced_table_name` AS foreign_table,
`referenced_column_name` AS foreign_column
FROM
`information_schema`.`KEY_COLUMN_USAGE`
WHERE
`constraint_schema` = SCHEMA()
AND
`table_name` = 'your-table-name-here'
AND
`referenced_column_name` IS NOT NULL
ORDER BY
`column_name`;
works, but it is painfully slow on the versions of MySQL I've tried it with. A bit of research turned up this bug, which seems to indicate it's an ongoing issue without a clear solution. The solutions hinted at require reconfiguring or recompiling MySQL with a patch, which doesn't work for the project I'm working on.
I realize it's possible to issue the following
SHOW CREATE TABLE table_name;
and get a string representation of a CREATE TABLE statement, which will include the foreign key constraints. However, parsing this string seems like it would be fragile, and I don't have a large corpus of CREATE TABLE statements to test against. (If there's a standard bit of parsing code for this out there, I'd love some links.)
I also realize I can list the indexes with the following
SHOW INDEX FROM table_name;
The list of indexes will include the foreign keys, but there doesn't appear to be a way to determine which of the indexes are foreign keys and which are "regular" MySQL indexes. Again, some cross-referencing with the SHOW CREATE TABLE information could help here, but that brings us back to fragile string parsing.
Any help, or even links to other smart discussions on the issue, would be appreciated.
SequelPro and Magento both utilize the SHOW CREATE TABLE query to load the foreign key information. Magento's implementation is the one I am going to reference since it's both a PHP based system and one that both of us are very familiar with. However, the following code snippets can be applied to any PHP based system.
The parsing is done in the Varien_Db_Adapter_Pdo_Mysql::getForeignKeys() method (the code for this class can be found here) using a relatively simple RegEx:
$createSql = $this->getCreateTable($tableName, $schemaName);
// collect CONSTRAINT
$regExp = '#,\s+CONSTRAINT `([^`]*)` FOREIGN KEY \(`([^`]*)`\) '
. 'REFERENCES (`[^`]*\.)?`([^`]*)` \(`([^`]*)`\)'
. '( ON DELETE (RESTRICT|CASCADE|SET NULL|NO ACTION))?'
. '( ON UPDATE (RESTRICT|CASCADE|SET NULL|NO ACTION))?#';
$matches = array();
preg_match_all($regExp, $createSql, $matches, PREG_SET_ORDER);
foreach ($matches as $match) {
$ddl[strtoupper($match[1])] = array(
'FK_NAME' => $match[1],
'SCHEMA_NAME' => $schemaName,
'TABLE_NAME' => $tableName,
'COLUMN_NAME' => $match[2],
'REF_SHEMA_NAME' => isset($match[3]) ? $match[3] : $schemaName,
'REF_TABLE_NAME' => $match[4],
'REF_COLUMN_NAME' => $match[5],
'ON_DELETE' => isset($match[6]) ? $match[7] : '',
'ON_UPDATE' => isset($match[8]) ? $match[9] : ''
);
}
In the doc block it describes the resulting array as follows:
/**
* The return value is an associative array keyed by the UPPERCASE foreign key,
* as returned by the RDBMS.
*
* The value of each array element is an associative array
* with the following keys:
*
* FK_NAME => string; original foreign key name
* SCHEMA_NAME => string; name of database or schema
* TABLE_NAME => string;
* COLUMN_NAME => string; column name
* REF_SCHEMA_NAME => string; name of reference database or schema
* REF_TABLE_NAME => string; reference table name
* REF_COLUMN_NAME => string; reference column name
* ON_DELETE => string; action type on delete row
* ON_UPDATE => string; action type on update row
*/
I know it's not exactly what you were asking for since it's using the SHOW CREATE TABLE output, but based on my findings, it seems to be the generally accepted way of doing things.
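As a rough illustration of what that parsing looks like outside of PHP, here is a JavaScript port of the same RegEx applied to a made-up SHOW CREATE TABLE string (the table and constraint names are invented for the example; this is a sketch, not production code):

```javascript
// JavaScript port of Magento's foreign-key RegEx, for illustration only.
const fkRegExp = /,\s+CONSTRAINT `([^`]*)` FOREIGN KEY \(`([^`]*)`\) REFERENCES (`[^`]*\.)?`([^`]*)` \(`([^`]*)`\)( ON DELETE (RESTRICT|CASCADE|SET NULL|NO ACTION))?( ON UPDATE (RESTRICT|CASCADE|SET NULL|NO ACTION))?/g;

// A sample SHOW CREATE TABLE result (invented for the example).
const createSql =
  'CREATE TABLE `orders` (\n' +
  '  `id` int NOT NULL,\n' +
  '  `customer_id` int DEFAULT NULL,\n' +
  '  PRIMARY KEY (`id`),\n' +
  '  CONSTRAINT `fk_orders_customer` FOREIGN KEY (`customer_id`) ' +
  'REFERENCES `customers` (`id`) ON DELETE CASCADE\n' +
  ') ENGINE=InnoDB';

// Extract the foreign keys, mirroring the structure Magento builds.
const keys = [...createSql.matchAll(fkRegExp)].map(m => ({
  name: m[1],
  column: m[2],
  refTable: m[4],
  refColumn: m[5],
  onDelete: m[6] ? m[7] : '',
  onUpdate: m[8] ? m[9] : ''
}));
```

The same caveat applies as in the answer above: this pattern assumes the exact quoting and clause ordering that mysqld emits, so it inherits the fragility the question worries about, but in practice the SHOW CREATE TABLE output format has been stable enough for tools like Magento and SequelPro to rely on it.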