Importing CSV into MySQL via cron

I am a complete noob when it comes to MySQL databases.
What I want to achieve is this: I have a SAP B1 database, and I am going to export data from the SQL server to a CSV; from there, I will send this CSV to my web server.
What I want to do then is load the CSV into a MySQL database on a scheduled (daily) basis via a cron job.
Here is the data that I will likely have, in multiple CSVs:
orders
invoices
credits
payments
Would I create a database for each, or have them all within one database within phpMyAdmin?
Also, let's take orders for example: would I create two tables, one for the order header information and another for the order lines?
An example of the invoices CSV would have the following format:
customernumber
customername
invoicenumber
purchaseordernumber
documentdate
freightamount
productcode
productname
barcode
quantity
price ex tax
price inc tax
RRP price
tax amount
doc total inc tax
Once the data is in the tables, I will then go about developing a secure website/application for my company that will be used by internal staff as well as customers.
Any advice would be appreciated.
Regards
Rick

One way to look at CSV files is that each is a table:
Header1,Header2,Header3
Value1,Value2,Value3
...,...,...
->
Header1 | Header2 | Header3
---------------------------
Value1 | Value2 | Value3
... | ... | ...
In MySQL, a single database can have many tables. So for your example, you may want a single database with one table for each CSV file.
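For the scheduled import itself, MySQL's LOAD DATA INFILE statement can pull a CSV straight into a table, and a cron entry can run it daily through the mysql command-line client. A minimal sketch, assuming an invoices table matching the CSV columns and a file path of /var/data/invoices.csv (the table name and path are illustrative):
-- Load the daily CSV into the invoices table (table name and path are illustrative)
LOAD DATA LOCAL INFILE '/var/data/invoices.csv'
INTO TABLE invoices
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row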


MySQL for Excel - Imported order error

I'm using the MySQL for Excel plug-in (version 1.3.7) to import data from my MySQL database into Excel; however, Excel is changing the order of the columns (to alphabetical order) while the data remains in the original order.
The data rows appear in the order I want, but the header row is wrong!
For example:
If my table is (in MySQL Workbench):
id | Name | Date_of_birth
01 | Test1 | 01/01/2001
02 | Test2 | 02/02/2002
Excel tries to import it as:
Date_of_birth | id | Name ---> (ALPHABETICAL ORDER)
01 | Test1 | 01/01/2001
02 | Test2 | 02/02/2002
Because the "Name" column is a varchar(100), it does not accept DATE type values below it.
So, I can not import anything into my excel.
The only way that I've found to solve my problem is to put my table in alphabetical order (inside the MYSQL Workbench). But it is very inefficient and I don't want to do that with every table.
Could you help me?
Thank you very much for your attention.
If you are copying and pasting, try using the "Text to Columns" button in Excel, under the Data tab.
Excel shouldn't be sorting these automatically. Start with a blank worksheet if you can and see if you have the same problem.
Otherwise, please post how you are moving the data from Workbench to Excel. It's likely that is the problem.
I got stuck on this for a while. I am surprised I could not find more complaints about this issue.
Deselecting the MySQL add-in, restarting Excel, and then reselecting the add-in did the trick for me.
To find the add-in:
File -> Options -> Add-ins -> Manage: COM Add-ins -> Go

Access 2016 prevent double loading of data

My Setup:
I have a decently large table where each record should hold all sales for a specific store for that day.
For example the records look roughly like:
Location | Date | Sales | etc.
Store 1 | 1/29/2018 | $20 | etc.
Store 2 | 1/29/2018 | $5 | etc.
Store 1 | 1/30/2018 | $25 | etc.
Store 2 | 1/30/2018 | $10 | etc.
In short, you should NEVER have the same store appear on the same day more than once.
What's the best way to check this? Can I do data validation on my records (I'm assuming not, because my understanding is it won't check against the already-loaded data), or do I need to write something in VBA? (I'm currently using canned saved imports, but if it's a must I can write something.)
I have an automated daily append to the table, but occasionally things get messed up, and manually stripping out a day's worth of duplicate data is obviously not ideal.
My original answer was:
Access can help you detect those duplicate stores and days easily with the query assistant. Just design a "find duplicates" query, using as criteria the fields you don't want repeated (in your question, I understand those fields are Location and Date).
The OP tried it and said:
Yeah it works. Really it's just easier to handle by importing to a temp table and then using a query to check it for duplicates before loading, as opposed to arcane data validation rules.
So the OP resolved the problem by importing the data into a temp table and then running the "check for duplicates" query before loading the data into the non-temp tables (see the sketch below).
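In SQL terms, the same idea can be written as an append query that only inserts rows whose store/day pair is not already present. A minimal sketch in Access SQL, assuming a TempSales staging table and a DailySales destination table (both names are illustrative):
-- Append only rows whose Location/Date pair is not already in DailySales
-- (TempSales and DailySales are illustrative table names)
INSERT INTO DailySales (Location, [Date], Sales)
SELECT t.Location, t.[Date], t.Sales
FROM TempSales AS t
LEFT JOIN DailySales AS s
    ON (s.Location = t.Location) AND (s.[Date] = t.[Date])
WHERE s.Location IS NULL;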

MySQL: Best way to deal with data like the average rating of restaurants, etc.?

I have a MySQL table which stores restaurant ratings like:
id | user_id | res_id | rating_value | review_id |
1 | 102 | 5567 | 4.0 | 26 |
2 | 106 | 5543 | 3.5 | 27 |
3 | 112 | 5567 | 3.0 | 31 |
and I have a Restaurant Profile webpage for each restaurant which shows data like the restaurant's average rating from users.
Users can review and rate a restaurant a limited number of times per day, so a single restaurant may receive many new rating rows per day from one user alone.
My question is:
1. Should I run a cron job daily (or weekly?) to SELECT AVG(rating_value) for each restaurant and update its stored rating, and will this consume a lot of memory?
2. Should I just keep the X most recent ratings and run a daily cron job to SELECT AVG(rating_value) over them for each restaurant?
3. Or should I only run SELECT AVG(rating_value) when a new rating is submitted?
It sounds like you want to keep the load on your database light for a value that doesn't change unless a new rating is submitted for that restaurant.
I would suggest caching this information in something like memcache or redis since duplicating this average in your database is redundant. You can then set an expiry on this (say one hour) and only go to the DB for the value if it's not in your caching solution.
If you want a real time solution then you should hook into your review submit logic to refresh the cached average value for the restaurant. This will guarantee that your application always displays the most accurate average rating.
If you want to store the data into the database, then I would recommend either a DB trigger to update a table storing this average rating field, or like with the caching solution, have submission of a review hook into updating this value.
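If you go the trigger route, it might look like the following. A minimal sketch, assuming a restaurants table that has an avg_rating column alongside the ratings table shown above (those names are illustrative):
-- Recompute the stored average whenever a new rating arrives
-- (restaurants and avg_rating are illustrative names)
DELIMITER //
CREATE TRIGGER after_rating_insert
AFTER INSERT ON ratings
FOR EACH ROW
BEGIN
    UPDATE restaurants
    SET avg_rating = (SELECT AVG(rating_value)
                      FROM ratings
                      WHERE res_id = NEW.res_id)
    WHERE id = NEW.res_id;
END//
DELIMITER ;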
There are many ways to handle this kind of query. For example:
First: you can run SELECT AVG(rating_value) whenever a new rating is added (index the relevant columns so the result is fetched quickly).
Second: you can run SELECT AVG(rating_value) whenever a new rating is added and save the result in a cache, keyed by restaurant id or in any other way.
Third: you can run that query via an hourly cron job and save the result in a cache, so the load on the database is reduced (see the sketch below).
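As a variation on the cron idea, MySQL's own event scheduler can do the hourly refresh without an external cron job. A sketch, again with the illustrative restaurants/avg_rating names (the event scheduler must be enabled):
-- Hourly refresh of stored averages via MySQL's event scheduler
-- (requires event_scheduler = ON; names are illustrative)
CREATE EVENT refresh_avg_ratings
ON SCHEDULE EVERY 1 HOUR
DO
    UPDATE restaurants AS r
    SET r.avg_rating = (SELECT AVG(rating_value)
                        FROM ratings
                        WHERE res_id = r.id);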
Thanks

Implementing condition in SSIS

I'm importing data from a txt file into a SQL Server table. That part works fine.
Every day this txt file is deleted and a new one is created (i.e. yesterday it held data for 3 February, today for 4 February, in the Date column).
When I run the package, I want it to check whether that Date value already exists in the database table. If it exists, skip the import; if it doesn't, import the data. I also want to save that Date value in a variable for further manipulation. How can I accomplish that?
Suppose your source file has the format and data below:
id | product | dateLoad
1 | dell | 25-01-2016 16:23:14
2 | hp | 25-01-2016 16:23:15
3 | lenovo | 25-01-2016 16:23:16
and your destination table has the format below:
create table stack(id int,product varchar(20),dateLoad smalldatetime);
In your SSIS package, first add a Derived Column to convert the smalldatetime to a date.
Then add a Lookup. In the General tab of the Lookup Transformation Editor, go to "Specify how to handle rows with no matching entries" and select "Redirect rows to no match output". In the Connection tab, add a connection to the target table; I used a SQL query there to convert the smalldatetime to a date (see the sketch below).
In the Columns tab, map the derived date column to the lookup's date column.
Finally, connect the Lookup's "no match output" to your target table.
On the first execution, 3 rows were inserted, because the dates were not yet in my table.
When I executed it another time, 0 rows were inserted, because the dates were already in the table.
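For reference, the Derived Column can use a cast expression such as (DT_DBDATE)dateLoad, and the Lookup's connection can use a query along these lines, assuming the stack table above:
-- Query for the Lookup, comparing on the date part only
SELECT DISTINCT CAST(dateLoad AS date) AS dateLoad
FROM stack;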
I hope that helps.

How do I make a MySQL query that's equivalent to Fusion Tables' "summarize" function?

I am parsing a collection of monthly lists of bulletin board systems from 1993-2000 in a city. The goal is to make visualizations from this data. For example, a line chart that shows month by month the total number of BBSes using various kinds of BBS software.
I have assembled the data from all these lists into one large table of around 17,000 rows. Each row represents a single BBS during a single month in time. I know this is probably not the optimal table schema, but that's a question for a different day. The structure is something like this:
date | name | phone | codes | sysop | speed | software
1990-12 | Aviary | xxx-xxx-xxxx | null | Birdman | 2400 | WWIV
Google Fusion Tables offers a function called "summarize" (or "aggregation" in the older version). If I make a view summarizing by the "date" and "software" columns, then FT produces a table of around 500 rows with three columns: date, software, count. Each row lists the number of BBSes using a given type of software in a given month. With this data, I can make the graph I described above.
So, now to my question. Rather than FT, I'd like to work on this data in MySQL. I have imported the same 17,000-row table into a MySQL database and have been trying various queries with COUNT and DISTINCT, hoping to return a list equivalent to what I get from FT's Summarize function. But nothing I've tried has worked.
Can anyone suggest how to structure such a query?
Kirkman, you can do this using the COUNT function and the GROUP BY clause (which is used in conjunction with aggregate SQL functions):
select date, software, count(*) as cnt
from your_table
group by date, software
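If the output feeds a month-by-month chart, adding an ORDER BY keeps the rows in plotting order (this assumes the date values sort correctly as YYYY-MM text, as in the sample row):
select date, software, count(*) as cnt
from your_table
group by date, software
order by date, software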