Task scheduling in Laravel: get data from the database (MySQL)

I'm new to Laravel and I need to store data collected from my DB every six hours. I thought of using Laravel's task scheduling, but in the kernel, where I'm supposed to create my cron job, I don't think I can return my data as an array.
I use six arrays, each one of them on a different page.
Here is an example of one of them:
$dataGlobal = DB::select("select * from articles order by articleDate ASC");
PS: the code above is in my controller, and I return the array with its view.
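That controller method presumably looks something like this minimal sketch (the method and view names are assumptions):

public function index()
{
    // Runs on every request, so the array always reflects the current table contents
    $dataGlobal = DB::select("select * from articles order by articleDate ASC");
    return view('articles.index', ['dataGlobal' => $dataGlobal]);
}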
Kernel.php
protected function schedule(Schedule $schedule)
{
    // $schedule->command('inspire')->hourly();
    $schedule->call(function () {
        // The query runs, but its result is discarded: a scheduled
        // closure has nowhere to return an array to.
        DB::select("select * from articles order by articleDate ASC");
    })->cron('0 */6 * * *'); // '* */6 * * *' would fire every minute of every sixth hour
}
Any ideas? How can I store my data in an array that I'm going to use in my pages? (It already works when I manually stop the project and relaunch it with php artisan serve.)

You can't return data from a cron job.
I think what you need is:
Create a cron job that collects the data every 6 hours and stores it in a table in your database; your pages can then read from that table.
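A minimal sketch of that idea in Kernel.php, assuming a hypothetical articles_snapshot table created to hold the cached rows:

$schedule->call(function () {
    // Rebuild the snapshot table from the live articles table
    $articles = DB::table('articles')->orderBy('articleDate')->get();
    DB::table('articles_snapshot')->truncate();
    foreach ($articles as $article) {
        DB::table('articles_snapshot')->insert((array) $article);
    }
})->cron('0 */6 * * *'); // top of every sixth hour

The pages would then read from articles_snapshot instead of running the query themselves.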

Since I solved my issue, I'm sharing it here in case someone faces the same thing.
Laravel is a PHP framework, and PHP is a scripting language: each time a visitor navigates through my site, the controller code is executed again at that moment.
Because of that, my array will always contain the latest rows of my database. I don't really need a cron job for this.

Related

Rails massive data upload and nested records

I have to upload a lot of data to MySQL (~100 million records!). Some records already exist, some have to be created. I also have to create some nested resources for each record.
I know the activerecord-import gem, but as far as I know it can't handle nested records (or only with ugly workarounds). The issue is that I don't know the IDs for all nested records before they are created, and creating them in single queries takes time.
So let's say there is a model called Post that can have many Comments. My current code looks like this:
Post.transaction do
  import_posts.each do |import_post|
    post = Post.find_or_initialize_by(somevalue: import_post['somevalue'])
    post.text = import_post['text']
    import_post['comments'].each do |import_comment|
      comment = post.comments.find_or_initialize_by(someothervalue: import_comment['someothervalue'])
      comment.text = import_comment['text']
    end
    post.save(validate: false) # Don't need validation - saves some time
  end
end
This is just an example, and it works, but it's far from 'damn fast'. Any ideas how to speed up the data upload? Am I totally wrong?
I'm working with Rails 5 and Ruby 2.4.
Thanks in advance!

CakePHP: auto-delete data from MySQL

The table has following fields: id, date, user_id, city, price
I would like rows that are older than 1 year to be automatically deleted, but I am not really sure how to do this. Should it be done by the application or the database? How?
Another issue: there are going to be ~50k inserts every year. What will happen to the 'id' field in a couple of years? Won't the number get too large? What could be done in this case?
Many thanks in advance.
I would suggest creating a basic shell that handles deleting the records, and scheduling a cron job that runs whenever you wish to check for records that should be deleted. The shell could be very simple, something like this:
class JanitorShell extends AppShell {

    // put the model you wish to delete from here
    public $uses = array('Model');

    public function main() {
        $this->Model->deleteAll(array(
            // strtotime() turns the relative offset into the timestamp date() expects
            'Model.created <' => date('Y-m-d', strtotime('-1 year'))
        ));
        $this->out('Records deleted.');
    }
}
Then your cron job would run your shell. To run your shell, call:
cake -app /path/to/app janitor
This assumes cake is in your PATH. Of course, this is a very basic shell and could be easily expanded, keeping logs of what's been deleted or even just moving deleted records to a new table, a sort of 'soft delete'. You should probably not put such destructive code in main() since it runs each time you run the shell, but this will get you started.
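The matching crontab entry might look like this (a nightly 2 a.m. run is just an example; adjust the schedule and path as above):

0 2 * * * cake -app /path/to/app janitor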
To answer your second question: 50k inserts a year is nothing to fret about. To know the limits of your primary keys, read up on MySQL datatypes; an unsigned INT, for instance, tops out at 4,294,967,295, which at 50k rows a year would take tens of thousands of years to exhaust.
Note the strtotime() call in the date condition: a bare date('Y-m-d', '-1 year') would not work, because date() expects a timestamp, not a relative-date string, as its second argument.

Auto-update prices in a database (MySQL)

I am currently getting products from one site, storing them in a database, and displaying their prices on another site. I am trying to get the prices from the first site to update daily in my database so the updated prices can be displayed on my other site.
Right now I am getting the products using an item number but have to manually go in and update any prices that have changed.
I am guessing I am going to have to use some kind of cron job, but I'm not sure how to do this. I have no experience with cron jobs and am a noob with PHP.
Any ideas?
Thanks!
I have done some reading on the foreach loop and have written some code, but my foreach loop only runs once, for the first item number: it runs, goes to the "api.php" page, and then stops instead of looping over each item number. How do I tell it to go through all of the item numbers in my database?
Also, if you see anything else wrong in my code, please let me know.
Thanks
....
$itemnumber = array("".$result['item_number']."");
foreach ($itemnumber as $item_number) {
    echo "<form method=\"post\" action=\"api.php\" name=\"ChangeSubmit\" id=\"ChangeSubmit\">";
    echo "<input type=\"text\" name=\"item_number\" value=\"{$item_number}\" />";
    echo "<script type=\"text/javascript\">
        function myfunc () {
            var frm = document.getElementById(\"ChangeSubmit\");
            frm.submit();
        }
        window.onload = myfunc;
        </script></form>";
}
}
If you already retrieve the product data from an external site and store it in a local database, updating the prices from the same source should be no problem: just retrieve the data, iterate over it in a foreach loop or similar, and update the prices in the database based on the item number.
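A rough sketch of such an update script, assuming a products table and a hypothetical fetch_price_from_source() helper that calls or scrapes the external site:

$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'password');

// Fetch every known item number, then refresh each price in turn
$items = $pdo->query('SELECT item_number FROM products')->fetchAll(PDO::FETCH_COLUMN);
$update = $pdo->prepare('UPDATE products SET price = :price WHERE item_number = :item');

foreach ($items as $itemNumber) {
    $price = fetch_price_from_source($itemNumber); // hypothetical call to the external site
    $update->execute([':price' => $price, ':item' => $itemNumber]);
}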
Once you have created the update script and run it manually, adding it as a cron job is as simple as running the command crontab -e and adding this row to execute your script every midnight:
0 0 * * * /usr/local/bin/php /path/to/your/script.php
Don't forget to use the correct path to PHP for your system; running which php in the shell will tell you the path.
If you have cron jobs on your server, it'll be very apparent: you make a PHP script that updates the prices and throw it in a daily cron job.
However, I do it this way:
Method 1: At the beginning of every page request, check the last "update" time (you choose how to store it). If it's been more than a day, do the update and set the "update" time to the current time.
This way, every time someone loads a page and it's been a day since the last update, it updates for them. However, this means the site is slower for one random user once a day. If that isn't acceptable, there's a little change:
Method 2: If you need to update (checked via the method above), start an asynchronous request for the data, handle the rest of the page, flush it to the user, then wait in a loop until the request finishes and update the database.
The downside to Method 2 is that the user won't see the updated values, but the benefit is that there is no extra wait for them.
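A minimal sketch of Method 1, assuming a PDO connection like the one sketched earlier, a one-row meta table holding the timestamp, and a hypothetical run_price_update() wrapper around the update logic:

$last = (int) $pdo->query("SELECT value FROM meta WHERE name = 'last_update'")->fetchColumn();

if (time() - $last > 86400) { // more than a day since the last update
    run_price_update($pdo);   // hypothetical wrapper around the update script above
    $stmt = $pdo->prepare("UPDATE meta SET value = :now WHERE name = 'last_update'");
    $stmt->execute([':now' => time()]);
}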

Last inserted ID in CakePHP

I use this code but it's not working in CakePHP. The code is:
$inserted = $this->get_live->query("INSERT INTO myaccounts (fname) VALUES ('test')");
After this I'm using:
$lead_id = $this->get_live->query("SELECT LAST_INSERT_ID()");
It works, but only once.
Try this. Lots less typing. In your controller, saving data to your database is as simple as:
public function add() {
    // save() expects an array of fields, not a plain string
    $data = array('fname' => 'test');
    $this->Myaccount->save($data);
    // $this->set sends controller variables to the view
    $this->set("last", $this->Myaccount->getLastInsertID());
}
You could loop through an array of data to save with foreach, returning the insertId after each, or you could use Cake's saveAll() method.
Myaccount is the Model object associated with your controller. Cake's naming convention requires a table called "myaccounts" to have a model class called "Myaccount" and a controller called "MyaccountsController". The view files live in app/View/Myaccounts/... and are named after your controller methods. So if you have an add() method in your controller, your view is app/View/Myaccounts/add.ctp.
The save() method generates the INSERT statement. If the data you want to save is located in $this->data, you can skip passing an argument in; it will save $this->data by default. save() even automagically detects whether to generate an UPDATE or an INSERT statement based on the presence of an id in your data.
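A quick sketch of that automagic behaviour (field values are just examples):

$this->Myaccount->create();                       // reset model state for a fresh record
$this->Myaccount->save(array('fname' => 'test')); // no id present, so save() INSERTs

$this->Myaccount->save(array('id' => 1, 'fname' => 'changed')); // id present, so save() UPDATEs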
As a rule of thumb, if you're using raw SQL queries at any point in Cake, you're probably doing it wrong. I've yet to run into a query so monstrously complex that Cake's ORM couldn't model it.
http://book.cakephp.org/2.0/en/models/saving-your-data.html
http://book.cakephp.org/2.0/en/models/additional-methods-and-properties.html?highlight=getlastinsertid
HTH :)
You can get the last inserted record's id with the following (works for CakePHP 1.3.x and CakePHP 2.x):
echo $this->ModelName->getLastInsertID();
Alternately, you can use:
echo $this->ModelName->getInsertID();
CakePHP 1.3.x found in cake/libs/model/model.php on line 2775
CakePHP 2.x found in lib/Cake/Model/Model.php on line 3167
Note: this function doesn't work if you run the insert query manually.
pr($this->Model->save($data)); // dumps the saved record, including 'id' => '1'
Here id is the last inserted value.

How are IQueryables dealt with in ASP.NET MVC views?

I have some tables in a MySQL database that represent records from a sensor. One of the features of the system I'm developing is to display these records to the web user, so I used the ADO.NET Entity Data Model to create an ORM, used LINQ to SQL to get the data from the database, and stored it in a ViewModel I designed, so I can display it using the MVCContrib Grid Helper:
public IQueryable<TrendSignalRecord> GetTrends()
{
    var dataContext = new SmgerEntities();
    var trendSignalRecords = from e in dataContext.TrendSignalRecords
                             select e;
    return trendSignalRecords;
}

public IQueryable<TrendRecordViewModel> GetTrendsProjected()
{
    var projectedTrendRecords = from t in GetTrends()
                                select new TrendRecordViewModel
                                {
                                    TrendID = t.ID,
                                    TrendName = t.TrendSignalSetting.Name,
                                    GeneratingUnitID = t.TrendSignalSetting.TrendSetting.GeneratingUnit_ID,
                                    //{...}
                                    Unit = t.TrendSignalSetting.Unit
                                };
    return projectedTrendRecords;
}
I call the GetTrendsProjected method and then use LINQ to select only the records I want. It works fine in my development scenario, but when I test it in a real scenario, where the number of records is far greater (around a million), it stops working.
I put in some debug messages to test it, and everything works fine until it reaches the return View() statement, where it simply stops, throwing a MySqlException: Timeout expired. That left me wondering whether the data I send to the page is retrieved by the page itself (i.e. it only queries the database for the displayed items when the page needs them, or something like that).
All of my other pages use the same set of tools: MVCContrib Grid Helper, ADO.NET, LINQ to SQL, MySQL, and everything else works fine.
You absolutely should paginate your data set before executing the query if you have millions of records. This can be done with the .Skip and .Take extension methods, and they must be applied before the query actually runs against your database. Your suspicion is right, by the way: an IQueryable is deferred, so the query only executes when the grid enumerates it inside return View(), which is why the timeout surfaces there.
Trying to fetch millions of records from a database without pagination will very likely cause a timeout at best.
Well, assuming the information in this blog is correct, the .AsPagination method requires you to sort your data by a particular column. It's possible that doing an OrderBy on a table with millions of records is simply a time-consuming operation and times out.