phpMyAdmin Import Data - MySQL

I have a MySQL table that looks like this:
+-------+
| NAME  |
+-------+
| James |
| Alex  |
| Jones |
| ...   |
+-------+
Each name is unique.
And I have a txt file containing a list of names that needs to be imported into this table.
The list needs to be imported keeping the order of the names, but when I use phpMyAdmin to import it, the list seems to get sorted by name before being imported.
How can I prevent this behavior? I just need it to be imported as is, without any change. And when I query, the results should come back in the same order I inserted them.

By default, the data returned from MySQL is ordered according to an indexed column (phpMyAdmin has nothing to do with the ordering). If you define an index on the table using the name column, the results will be sorted according to that index. But if you want them ordered by insertion, order them by ID; this only works if you set the ID as an auto_increment column in the design, because it will automatically increment the value by 1 for each newly inserted row. :)
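For example (a minimal sketch; the table and column names here are only illustrative):
CREATE TABLE names (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- records the insertion order
    name VARCHAR(100) NOT NULL UNIQUE
);
-- after importing the txt file, query in the original import order:
SELECT name FROM names ORDER BY id;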

Give each record an ID before importing; then you can sort by ID to get the original order.

Related

Delete partial duplicate rows

I have a Dataverse table that has a few columns. One of those columns is an Order Number column. There should only be one row per order number. If there is more than 1, only the first one should be kept. How can I do this in Power Automate?
What I have tried so far: First, I created an array of all the order numbers. From there, I feel stuck. I started to add an Apply to Each action, loop through the table, count how many of each order number there are, but then I confused myself and didn't think that was the right way to go.
Or...is there a way to keep the "duplicate" rows from getting added to the Dataverse table in the first place? The data is getting loaded into the table via a JSON load. Is there a way to delete the "duplicate" items from the JSON?
Here's an example of the situation:
| OrderNumber | OrderDate | CustomerName |
| ----------- | --------- | ------------ |
| 450123      | 2-24-22   | Business A   |
| 450123      | 2-25-22   | Business A   |
| 383238      | 2-24-22   | Business B   |

Best way to handle duplicated rows

I have an insurance companies "dictionary" table in my database, let's say:
+----+-------------------+----------+
| ID | Name              | Data     |
+----+-------------------+----------+
| 1  | InsuranceCompany1 | SomeData |
+----+-------------------+----------+
But I'm fetching data from another system, and as a result I get duplicates of insurance companies, but without my data:
+----+-------------------+----------+
| ID | Name              | Data     |
+----+-------------------+----------+
| 1  | InsuranceCompany1 | SomeData |
+----+-------------------+----------+
| 2  | InsuranceCompany1 |          |
+----+-------------------+----------+
Both records are related to a variety of models, but they refer to the same data. What I want is to pair these records without changing queries or data in other tables, so no one knows there are two records, but both refer to one instance, which is
+----+-------------------+----------+
| 1  | InsuranceCompany1 | SomeData |
+----+-------------------+----------+
My question is: Is there some proper way to handle situations like this?
I've come up with a solution, which is to add a parent_id column, manually set parent_id on the duplicated rows, and then override Eloquent methods like find in the model to return the parent if parent_id is set.
Copying the SomeData column is not an option because there can be a condition like insurance_company_id == id.
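At the database level, that parent_id idea could be sketched like this (a minimal sketch; the table name dict and the IDs are only illustrative):
ALTER TABLE dict ADD COLUMN parent_id INT NULL;  -- NULL for originals, set for duplicates
UPDATE dict SET parent_id = 1 WHERE ID = 2;      -- mark row 2 as a duplicate of row 1
-- resolving a row to its parent, as the overridden find() would:
SELECT COALESCE(p.ID, c.ID) AS ID,
       COALESCE(p.Name, c.Name) AS Name,
       COALESCE(p.Data, c.Data) AS Data
FROM dict c
LEFT JOIN dict p ON p.ID = c.parent_id
WHERE c.ID = 2;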
You can try creating a view of your dict table something like this:
CREATE VIEW unique_dict AS
SELECT MIN(ID) AS ID,
       Name,
       GROUP_CONCAT(Data) AS Data
FROM dict
GROUP BY Name;
That will give you one row per name.
Then, in your queries requiring one row per name, SELECT from the unique_dict view rather than the dict table.
GROUP_CONCAT() yields a list of values from Data, which helps if more than one duplicated row contains a value: you get them all.
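For example, a lookup that used to hit dict directly could read from the view instead (names as in the example above):
SELECT ID, Name, Data
FROM unique_dict
WHERE Name = 'InsuranceCompany1';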
Longer term you might be smart to consider these duplicates to be "dirty data", and clean them up as you INSERT new rows. How to do that?
Create a unique index on Name.
CREATE UNIQUE INDEX unique_name ON dict(Name);
Then, when loading new data into dict, use Eloquent's updateOrCreate() function. Here's something to read about that: Laravel 5.1 Create or Update on Duplicate
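If you prefer to handle it at the SQL level rather than in Eloquent, the same idea can be sketched with INSERT ... ON DUPLICATE KEY UPDATE (a minimal sketch that relies on the unique index above; the values are only illustrative):
INSERT INTO dict (Name, Data)
VALUES ('InsuranceCompany1', NULL)        -- row coming from the other system, without Data
ON DUPLICATE KEY UPDATE
    Data = COALESCE(Data, VALUES(Data));  -- keep the existing Data when the name already exists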

Find existence of a record in MySQL table with input data as a list

I have a list of ids in text format, as comma-separated values, like so:
("12345", "12346", "12347", etc, etc)
I would like to check their existence or non-existence in a table, say a devices table, which has a column called device_id (not a primary key).
Ideally I would like to get a list which says whether each item exists or not.
So far I have only been able to query those that exist, and I have to find the non-existing ones manually.
Is there a for loop I have to run in a stored procedure, or something like that? Please help.
Table structure
+----+-----------------+-------------+
| id | device_id       | device_name |
+----+-----------------+-------------+
| 71 | 352701060409650 | 57X         |
| 13 | 352701060409700 | 582         |
+----+-----------------+-------------+
You need to create a query with a LEFT JOIN against the table, combined with an IFNULL (or IS NULL) check. There has already been a post on this topic; please check it out here.
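A minimal sketch of how that could look, assuming the table is named devices as in the question (the ids in the derived list are only illustrative):
SELECT input.device_id,
       IF(d.device_id IS NULL, 'missing', 'exists') AS status
FROM (
    SELECT '12345' AS device_id
    UNION ALL SELECT '12346'
    UNION ALL SELECT '12347'
) AS input
LEFT JOIN devices d ON d.device_id = input.device_id;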

How to generate a unique id based on different id category?

I have a table as shown below
| id | name | doc_no    |
|:---|-----:|:---------:|
| 1  |  abc | D11710001 |
| 2  |  efg | D21710001 |
| 3  |  hij | D31710001 |
| 4  |  klm | D41710001 |
| 5  |  nop | D51710001 |
| 1  |  qrs | D11710002 |
I want to generate a unique id based on the id given. For example, when I have an item to be stored in this table, it should generate a unique doc_no based on the id of the table.
Note: The id in this table is a foreign key. The doc_no can be modified by the user into their own format manually.
The generated id format: D + 'id' + 'year' + 'month' + 0001 (auto increment)
How can I write the SQL to generate the unique id when storing the data?
Continuing with the comment by @Strawberry, I might recommend not storing this generated ID in your database. Besides the fact that accessing the auto increment ID at the same time you are inserting the record might be tricky, storing the generated ID would duplicate information already stored elsewhere in your table. Instead of storing it, just generate it when you query, e.g.
SELECT
    id, name, doc_no,
    -- build the doc_no as D + id + two-digit year and month + zero-padded counter
    CONCAT('D', id, DATE_FORMAT(date, '%y%m'), LPAD(auto_id, 4, '0')) AS unique_id
FROM yourTable;
This assumes that you would be storing the insertion date of each record in a date column called date. It also assumes that your table has an auto increment column called auto_id. Note that having the date of insertion stored may be useful to you in other ways, e.g. if you want to search for data in your table based on date or time.
You could create a trigger to update the column, or you can run the UPDATE statement just after your INSERT:
INSERT INTO <YOUR_TABLE> (NAME, DOC_NO) VALUES ('hello', 'dummy');
UPDATE <YOUR_TABLE>
SET DOC_NO = CONCAT('D',
                    CAST(YEAR(NOW()) AS CHAR(4)),
                    CAST(MONTH(NOW()) AS CHAR(4)),
                    LAST_INSERT_ID())
WHERE id = LAST_INSERT_ID();
Please note that the above SQL may cause a race condition when the server gets multiple simultaneous requests.
@Tim Biegeleisen has a good point though: it is better to construct the id when you are SELECTing the data.

How to get the right "version" of a database entry?

Update: Question refined, I still need help!
I have the following table structure:
table reports:
ID | time       | title | (extra columns)
1  | 1364762762 | xxx   | ...
Multiple object tables that have the following structure:
ID | objectID | time       | title | (extra columns)
1  | 1        | 1222222222 | ...   | ...
2  | 2        | 1333333333 | ...   | ...
3  | 3        | 1444444444 | ...   | ...
4  | 1        | 1555555555 | ...   | ...
In the object tables, when an object is updated a new version with the same objectID is inserted, so that the old versions are still available. For example, see the entries with objectID = 1.
In the reports table, a report is inserted but never updated/edited.
What I want to be able to do is the following:
For each entry in my reports table, I want to be able to query the state of all objects as they were when the report was created.
For example, let's look at the sample report above with ID 1. At the time it was created (see the time column), the current version of objectID 1 was the entry with ID 1 (the entry with ID 4 did not exist at that point).
ObjectID 2 also existed, with its current version being the entry with ID 2.
I am not sure how to achieve this.
I could use a query that selects the object versions by the time column:
SELECT *
FROM (
    SELECT *
    FROM objects
    WHERE time < [reportTime]
    ORDER BY time DESC
) AS versions
GROUP BY objectID;
Let's not talk about the performance of this query; it is just to make clear what I want to do. My problem is the comparison of the time columns. I don't think this is a good way to make sure I get the right object versions, because the system time may change "for any reason", and the time column would then contain wrong data, which would lead to wrong results.
What would be another way to do so?
I thought about not using a time column for this, but instead a GLOBAL incremental value, so that I know the insertion order across the database tables.
If you are inserting new versions of the object, and your problem is the time column (I assume you are using this column to determine which version is newer), I suggest you use an auto-increment ID column for the versions. Even if the time value is not reliable for you, the ID will be, since it is always increasing. So: higher ID, newer version.
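A minimal sketch of how the report query could then look, assuming the object table is named objects, the new auto-increment column is named version_id, and [reportVersionId] stands for the highest version_id that existed when the report was created (a bookkeeping value you would need to record):
SELECT o.*
FROM objects o
JOIN (
    SELECT objectID, MAX(version_id) AS latest_version
    FROM objects
    WHERE version_id <= [reportVersionId]
    GROUP BY objectID
) AS latest
  ON latest.objectID = o.objectID
 AND latest.latest_version = o.version_id;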