Storing an array (doubles) in phpMyAdmin - MySQL

I'm very new to MySQL, although I've used SQL databases in other contexts before. I have a test site set up with an online cPanel that gives access to phpMyAdmin. I'm attempting to set up a MySQL database, and so far it's working fine (I can connect to the database and the table).
The only problem I'm having is with inserting data. I'd like to insert an entire array (specifically, a double[]) into one column. Looking at the column types available in phpMyAdmin, it doesn't seem to support arrays other than binary arrays.
I've found many solutions for inserting arrays programmatically, including this thread, but for this site we will be inserting data via the online cPanel. Is there a way to do that?

If you want access to that data, and want to be able to use the power of SQL to search within your double[], you should do it this way:
First, spend some time researching relational databases. They allow you to create linked data.
An important part of every relational database is good keys. A key is a unique identifier for a row that lets you access that row efficiently.
Another important part of relational databases is indexes. Indexes are not required to be unique, but they are useful if you are trying to search on them (the database builds an "index" of the table based on a column or group of columns).
Rather than creating a table that holds a double[] array directly, you create a second table that relates to the first table by the first table's primary key:
CREATE TABLE base (
base_id INT AUTO_INCREMENT,
name VARCHAR(32),
PRIMARY KEY(base_id)
);
CREATE TABLE darray (
base_id INT,
data DOUBLE,
INDEX(base_id)
);
To get the information back out that you want, you can select using a JOIN statement. If you wanted to get all the information where the base_id was 3, you would write it like so:
SELECT * FROM base
JOIN darray ON darray.base_id = base.base_id
WHERE base.base_id = 3;
A more advanced way of writing this, using table aliases:
SELECT * FROM base b
JOIN darray d ON d.base_id = b.base_id
WHERE b.base_id = 3;
If you don't need to search within the data, but just want to store and recall it wholesale, you can do it this way instead. (Although this is debatable, I still recommend the approach above if you are willing to learn more SQL.)
I assume you will be using PHP; we will serialize the data (see: http://php.net/manual/en/function.serialize.php).
Note that in this case we don't have the darray table; instead we add a
data BLOB
column to the base table.
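For example, assuming the base table defined earlier, one way to add that column is:
-- add a BLOB column to hold the serialized array as opaque bytes
ALTER TABLE base ADD COLUMN data BLOB;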
Inserting the serialized data with PHP:
<?php
// Note: the old mysql_* extension is deprecated; it is kept here to match the original example.
$serializedData = mysql_real_escape_string(serialize($darray)); // escape, since serialized strings contain quotes
$result = mysql_query("INSERT INTO base (name, data) VALUES ('a name', '$serializedData')");
Retrieving the serialized data:
<?php
$result = mysql_query("SELECT data FROM base WHERE base_id = 3");
if ($result && mysql_num_rows($result) > 0) { // mysql_num_rows (not mysql_affected_rows) for SELECTs
    $serializedData = mysql_result($result, 0, 'data');
    $darray = unserialize($serializedData);
}

You can import data into tables with a .sql file (basically just a file full of INSERT queries), but phpMyAdmin doesn't support inserting arbitrary data types directly. If you want to insert a double[] array as multiple rows in a table, you'll need to take an approach similar to the one in the thread you linked.
(Note that you can always write such a program for the explicit purpose of generating a .sql file, which you then use for deployment.)
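For example, a generated .sql file for the two-table layout above might look like this (values are purely illustrative):
-- illustrative deployment file: one base row plus its double[] values as darray rows
INSERT INTO base (name) VALUES ('sample run');
-- assumes the row above received base_id = 1
INSERT INTO darray (base_id, data) VALUES (1, 0.12), (1, 3.45), (1, 6.78);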

Related

Create and Update a Single Table from SQL Backend with Multiple Tables, Using Access 2010

Good morning all,
I'm having a problem pulling the data I need from a SQL backend and keeping it up to date.
I've got two tables that hold the data I need. At one point they were split due to a software update we received. First table: dbo_PT_NC. Second table: dbo_PT_Task.
The primary key of PT_NC is the "NCR" field; the Task table has its own unique ID, but the PT_Task.TaskTypeID field is linked to the "NCR" field.
SELECT dbo_PT_Task.TaskTypeID,
dbo_PT_NC.NCR,
dbo_PT_NC.NCR_Date,
dbo_PT_NC.NC_type,
dbo_PT_NC.Customer,
dbo_PT_NC.Material,
dbo_PT_NC.Rev,
dbo_PT_NC.Qty_rejected,
dbo_PT_Task.TaskType,
dbo_PT_Task.Notes AS dbo_PT_Task_Notes,
dbo_PT_NC.Origin,
dbo_PT_NC.Origin_ref,
dbo_PT_NC.Origin_cause,
dbo_PT_NC.Origin_category
FROM dbo_PT_NC INNER JOIN dbo_PT_Task ON dbo_PT_NC.[NCR] = dbo_PT_Task.[TaskTypeID]
WHERE (((dbo_PT_NC.NCR_Date)>=#1/1/2016#) AND ((dbo_PT_Task.TaskSubType)="Origination"))
ORDER BY dbo_PT_NC.NCR_Date, dbo_PT_NC.Customer;
After I have this data pulled and put into a snapshot (I do not want the live data to be accessible by the front-end users), I'll be adding columns for a Weak Point Management System we are implementing, fields such as:
Scrap Code (a lookup field to another table I've built inside Excel)
Containment, Root Cause, Plan, Do, Check, and Act, all of which should most likely be Memo fields (as the text may exceed 255 characters)
Date Completed (the date the process was completed)
This table (the data I've snapshotted plus the new fields) will need to be updated with new or changed records from the SQL backend I previously connected to.
UPDATE
Big thanks to Andre. Got it working; sample code below (I've added more update fields since):
UPDATE tblWeakPointMaster, dbo_PT_NC INNER JOIN dbo_PT_Task ON dbo_PT_NC.NCR = dbo_PT_Task.TaskTypeID
SET tblWeakPointMaster.Qty_rejected = [dbo_PT_NC].[Qty_rejected],
tblWeakPointMaster.dbo_PT_Task_Notes = [dbo_PT_Task].[Notes],
tblWeakPointMaster.Material = [dbo_PT_NC].[Material],
tblWeakPointMaster.Rev = [dbo_PT_NC].[Rev],
tblWeakPointMaster.NC_type = [dbo_PT_NC].[NC_type]
WHERE (((tblWeakPointMaster.NCR)=dbo_PT_NC.NCR) And ((tblWeakPointMaster.TaskID)=dbo_PT_Task.TaskID));
I assume there is a 1:n relation between PT_NC and PT_Task?
Then you should include both primary keys in the import SELECT.
Either use them as a composite primary key in the Access tables instead of the new KEY column, or, if that is impractical because other tables are linking to tblWeakPointMaster, you can also keep that primary key.
But in any case, these two columns form the JOIN between tblWeakPointMaster and tblWeakPointUpdates.
All other columns can be used to update tblWeakPointMaster from tblWeakPointUpdates (assuming they can be edited in the original database).
Edit: if you don't use them as a composite primary key, you need to create a unique index on the combination, or the JOIN will not be updateable, I think.
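As a sketch, using the table and field names from the question (adjust to your actual schema), the unique index could be created like this:
CREATE UNIQUE INDEX idxNcrTask ON tblWeakPointMaster (NCR, TaskID);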

Select decoded JSON data from joined MySQL tables

Could you please tell me how to make a selection from a MySQL database if I have two tables with JSON data? One of them has the following structure:
Table Trees
(id, name, value) - three columns
which includes following data
1, trees, [{"name":"Oaktree","value":1,"target":null},{"name":"Appletree","value":2,"target":null},{"name":"Plumtree","value":3,"target":null}]
2, length, [{"name":"10m","value":1,"target":null},{"name":"15m","value":2,"target":null},{"name":"20m","value":3,"target":null}]
3, age, [{"name":"5y","value":1,"target":null},{"name":"10y","value":2,"target":null},{"name":"20y","value":3,"target":null}]
The second table has the following structure:
Table SelectedTrees
(properties) - only one column
which includes the following data
[{"id":"1","value":["1","3"]},{"id":"2","value":["1", "2", "3"]},{"id":"3","value":["2"]}]
This represents data selected from the Trees table: the id in the properties column of SelectedTrees corresponds to the id column of the Trees table. I would like to select the real (json_decoded) values from the database, like:
Trees = Oaktree, Plumtree
Length = 10m, 15m, 20m
Age = 10y
How can I do this?
Thanks in advance.
Jan
In a nutshell, this is not possible. Relational databases are built for quickly comparing constant values that they can index. JSON is just a string to MySQL, and any kind of partial string matching triggers a so-called table scan, which is essentially going to become freaking slow when you get serious amounts of data.
You COULD get it to work like this:
SELECT * FROM Trees
JOIN SelectedTrees
ON SelectedTrees.properties LIKE CONCAT('%"id":"', Trees.id, '"%')
This is however just a hack that you should never use in any production system, and I advise against using it even in a test system. Instead, refactor your database so there's never any JSON in there that you need to match on in your queries. It's fine to store secondary data as JSON; just make sure the IDs and names are extracted before insertion and stored in separate columns, so the DB engine can do its relational magic.
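For example, a normalized layout (hypothetical table and column names) lets the engine join on indexed columns, and GROUP_CONCAT can rebuild the comma-separated lists:
-- hypothetical normalized replacement for the JSON columns
CREATE TABLE tree_options (
  tree_id INT,  -- references Trees.id
  value   INT,
  name    VARCHAR(32)
);
CREATE TABLE selected_tree_values (
  tree_id INT,  -- references Trees.id
  value   INT
);

SELECT t.name, GROUP_CONCAT(o.name SEPARATOR ', ') AS selected
FROM Trees t
JOIN selected_tree_values s ON s.tree_id = t.id
JOIN tree_options o ON o.tree_id = s.tree_id AND o.value = s.value
GROUP BY t.name;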

Joining a table stored within a column of the results

I want to try and keep this as one query and not use PHP, but it's proving to be tough.
I have a table called applications, that stores all the applications and some basic information about them.
Then, I have a table with all the types of applications in it, and that table contains a reference to another table which stores more specific data about the specific type of application in question.
select applications.id as appid, applications.category, type.title as type, type.id as tid, type.valuefld, type.tablename
from applications
left join type on applications.typeid=type.id
left join department on type.deptid=department.id
where not isnull(work_cat)
and work_cat != ''
and applications.deleted=0
and datei between '10-04-14' and '11-04-14'
order by type, work_cat
Now, in the old version, there is another query on every single result. Over hundreds of results... that sucks.
This is the query I'd like to integrate so I can get all the data in one result row. (Old is ASP, I'm re-writing it in PHP)
query = "select sum("&adors.fields("valuefld")&") as cost, description from "&adors.fields("tablename")&" where appid = '"&adors.fields("tablename")&"'"
Prepared statements, I'm aware, are the best solution, but for now they are not an option.
You can't do this with a plain SQL query - you need to have a defined set of tables that your query is based on. The fact that your current implementation queries from whatever table is named by tablename from the first result-set means that to get this all in one query, you will have to restructure your data. You have to know what tables you're querying from rather than having it dynamic.
If the reason for these different tables is the different information stored in each requiring different record (column) structures, you might want to look into Key/Value pair storage in a large table. Once you combine the dynamically named ones into a single location you can integrate your two queries together.
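As a rough sketch of that idea (the application_details table and its columns are hypothetical; applications and type come from the query in the question), the per-type detail tables collapse into one key/value table and the per-row lookups become a single grouped join:
-- hypothetical key/value table replacing the per-type detail tables
CREATE TABLE application_details (
  appid       INT,
  field_name  VARCHAR(64),
  field_value DECIMAL(12,2),
  INDEX (appid)
);

-- one query instead of a follow-up query per result row
SELECT a.id AS appid, t.title AS type, SUM(d.field_value) AS cost
FROM applications a
LEFT JOIN type t ON a.typeid = t.id
LEFT JOIN application_details d ON d.appid = a.id
GROUP BY a.id, t.title;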

adding data to interrelated tables..easier way?

I am a bit rusty with MySQL and trying to jump in again, so sorry if this is too easy a question.
I basically created a data model that has a table called "Master" with required fields of a name and an IDcode, and then a "Details" table with a foreign key of IDcode.
Now here's where it's getting tricky. I am entering:
INSERT INTO Details (Name, UpdateDate) Values (name, updateDate)
I get an error saying IDcode on Details doesn't have a default value, so I add one; then it complains that field 'Master_IDcode' doesn't have a default value.
It all makes sense, but I'm wondering if there's an easy way to do what I am trying to do. I want to add data into Details, and if no IDcode exists, I want to add an entry to the Master table. The problem is I have to first add the name to the Master table, wait for a unique ID to be generated (for IDcode), then look that up and add it to my query when I enter the Details data. As you can imagine, the queries are probably going to get quite long, since I have many tables.
Is there an easier way, where every time I add something it searches by name whether a foreign key exists and, if not, adds it to all the tables it's linked to? Is there a standard way people do this? I can't imagine that, with all the complex databases out there, people have not figured out an easier way.
Sorry if this question doesn't make sense. I can add more information if needed.
P.S. This may be a different question, but I have heard of Django for Python and that it helps create queries; would it help my situation?
Thanks so much in advance :-)
(decided to expand on the comments above and put it into an answer)
I suggest creating a set of staging tables in your database (one for each data set/file).
Then use LOAD DATA INFILE (or insert the rows in batches) into those staging tables.
Make sure you drop indexes before the load, and re-create what you need after the data is loaded.
You can then make a single pass over the staging table to create the missing master records. For example, let's say one of your staging tables contains a country code that should be used as a master ID. You could add the missing master records with something along the lines of:
insert
into master_table(country_code)
select distinct s.country_code
from staging_table s
left join master_table m on(s.country_code = m.country_code)
where m.country_code is null;
Then you can proceed and insert the rows into the "real" tables, knowing that all detail rows reference a valid master record.
If you need to get reference information along with the data (such as translating some code) you can do this with a simple join. Also, if you want to filter rows by some other table this is now also very easy.
insert
into real_table_x(
key
,colA
,colB
,colC
,computed_column_not_present_in_staging_table
,understandableCode
)
select x.key
,x.colA
,x.colB
,x.colC
,(x.colA + x.colB) / x.colC
,c.understandableCode
from staging_table_x x
join code_translation c on(x.strange_code = c.strange_code);
This approach is a very efficient one and it scales very nicely. Variations of the above are commonly used in the ETL part of data warehouses to load massive amounts of data.
One caveat with MySQL is that it doesn't support hash joins, a join mechanism very well suited to fully joining two tables. MySQL uses nested loops instead, which means you need to index the join columns carefully.
InnoDB tables with their clustering feature on the primary key can help to make this a bit more efficient.
One last point: when you have the staging data inside the database, it is easy to add some analysis of the data and put aside "bad" rows in a separate table. You can then inspect the data using SQL instead of wading through CSV files in your editor.
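For example, on the country-code staging example above, you would make sure the looked-up side of each join is indexed before the matching passes (index names are just illustrative):
-- with nested-loop joins, the looked-up side of each join needs an index on the join column
ALTER TABLE master_table ADD INDEX idx_country_code (country_code);
ALTER TABLE code_translation ADD INDEX idx_strange_code (strange_code);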
I don't think there's a one-step way to do this.
What I do is issue a
INSERT IGNORE INTO master (..) VALUES (..)
to the master table, which will either create the row if it doesn't exist or do nothing, and then issue a
SELECT id FROM master WHERE someUniqueAttribute = ..
The other option would be stored procedures/triggers, but they are still pretty new in MySQL and I doubt whether this would help performance.

load csv file content into mysql table with validation

I want to import a large CSV file (about 12 MB) into a MySQL table. First I tried LOAD DATA INFILE and it works perfectly, but in my case I first want to test the CSV rows to determine whether to update existing data or insert new records.
So the solution is to read the file, compare the content of each row with the data already in the table, and take the appropriate action.
This method also works, but it takes a lot of time and resources.
Now my questions are:
1: Can I use the import functions of phpMyAdmin (open source) even though my project is commercial?
2: If I can, do you know of any tutorials about this (any ideas)?
3: If I can't, are there commercial frameworks for exporting/importing?
Thanks
This is actually pretty common SQL: you either want to insert or update, yes? So you need two statements (one for the update, one for the insert) and a way to tell whether a row should be inserted. What you really need is a unique key that will never be duplicated for an individual record (it can be a composite key), plus two statements like these:
-- note that I write these in T-SQL
UPDATE dst SET
    dst.col1 = src.col1,
    dst.col2 = src.col2
    -- etc
FROM the_existing_data dst
JOIN the_import_table src
    ON src.key = dst.key;   -- rows that already exist get updated

INSERT INTO the_existing_data (
    key,
    col1,
    col2
    -- etc
)
SELECT
    src.key,
    src.col1,
    src.col2
    -- etc
FROM the_import_table src
LEFT JOIN the_existing_data dst
    ON dst.key = src.key
WHERE dst.key IS NULL;      -- rows with no match get inserted
Note that you can match on composite keys using a set of AND-ed conditions in the join, and note that while you're not updating the composite key columns, you probably are inserting them. This should be enough to get you started. Before you ask for clarification, update your question with actual code.
MySQL has specific insert syntax to deal with duplicate rows.
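For completeness, a sketch of MySQL's INSERT ... ON DUPLICATE KEY UPDATE, which folds the two statements into one, assuming a UNIQUE or PRIMARY KEY on the key column (table and column names are illustrative):
-- the duplicate detection relies on a UNIQUE or PRIMARY KEY on `key`
INSERT INTO the_existing_data (`key`, col1, col2)
VALUES ('abc', 1, 2)
ON DUPLICATE KEY UPDATE
  col1 = VALUES(col1),
  col2 = VALUES(col2);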