OBD-II Perl code hangs after specific number of AT requests

After connecting to a Bluetooth OBD-II adapter, I am able to get
data by sending PID service requests, but the responses stop after
exactly the same number of requests every time.
for ( ;; ) {
    $obj->write( "010C\r" );    # engine RPM
    if ( $data = $obj->input ) {
        print "$data";
    }
    $obj->write( "010D\r" );    # vehicle speed
    if ( $data = $obj->input ) {
        print "$data";
    }
}
Can you please let me know what the problem could be? I read
somewhere about two options: 1) re-initializing the adapter, and
2) the input buffer being left with CRs. I am looking into those.
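For what it's worth, here is a minimal sketch of the "drain the buffer" idea, assuming the adapter is ELM327-compatible (most Bluetooth OBD-II dongles are) and that every reply ends with a '>' prompt; $obj and the write/input calls are the same ones used above:

use Time::HiRes qw(sleep);

# Send one PID request and keep reading until the ELM327 '>' prompt appears,
# so no partial response or stray CRs are left in the buffer for the next request.
sub send_pid {
    my ( $obj, $cmd ) = @_;
    $obj->write( "$cmd\r" );
    my $response = '';
    until ( $response =~ />/ ) {
        my $chunk = $obj->input;
        if ( defined $chunk && length $chunk ) {
            $response .= $chunk;
        }
        else {
            sleep 0.05;    # adapter is still replying; avoid busy-waiting
        }
    }
    $response =~ tr/\r>//d;    # strip carriage returns and the prompt
    return $response;
}

# Usage:
# while (1) {
#     print send_pid( $obj, "010C" ), "\n";    # engine RPM
#     print send_pid( $obj, "010D" ), "\n";    # vehicle speed
# }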
(I used Torque on my Android phone with the same OBD-II Bluetooth
adapter and it gets data continuously, so there must be something
wrong in what I am doing.)
Thank you for any response.

Related

How to update mysql database from csv files in groovy-grails?

I have a table in my database which needs to be updated, for some rows and columns, with values from a CSV file (the file is outside of the Grails application). The CSV file contains a large set of data mapping addresses to cities. Some of the addresses in my application have the wrong city. So I want to get a city from the database (the Grails application DB), compare it with the city in the CSV file, map the address to it, and add that address to the application database.
What is the best approach?
For Grails 3, use https://bintray.com/sachinverma/plugins/org.grails.plugins:csv to parse the CSV and add the following to build.gradle. The plugin is also available for Grails 2.
repositories {
    maven { url "https://bintray.com/sachinverma/plugins/org.grails.plugins:csv" }
}
dependencies {
    compile "org.grails.plugins:csv:1+"
}
Then use it in your service like this:
def is
try {
    is = params.csvfile.getInputStream()
    def csvMapReader = new CSVMapReader( new InputStreamReader( is ) )
    csvMapReader.fieldKeys = ["city", "address1", "address2"]
    csvMapReader.eachWithIndex { map, idx ->
        def dbEntry = DomainObject.findByAddress1AndAddress2( map.address1, map.address2 )
        if ( map.city != dbEntry.city ) {
            // assuming we're just updating the city on the current entry?
            dbEntry.city = map.city
            dbEntry.save()
        }
        // do whatever logic
    }
} finally {
    is?.close()
}
This is of course a simplified version, as I don't know your CSV or schema layout.

Powershell refuses to sort hashtable initially or keep it sorted [duplicate]

I have a hash table here, and it eventually outputs to an Excel spreadsheet, but the issue appears to be the way the system sorts the hash table by default. I want it to return the machines in the same order that they were entered. The way it currently works, a box pops up and you paste in all your machine names, so they are all in memory prior to the foreach loop. I was previously sorting this by the longest uptime, but it now needs to be in the order they were entered. My initial thought is to create another hash table and capture them in the same order as the $machineList variable, but that might leave me in the same position. I tried to search but couldn't find info on the default way that hash tables sort.
Any ideas?
$machineUptime = @{}
foreach($machine in $machineList){
    if(Test-Connection $machine -Count 1 -Quiet){
        try{
            $logonUser = #gets the logged on user
            $systemUptime = #gets the wmi property for uptime
            if($logonUser -eq $null){
                $logonUser = "No Users Logged on"
            }
            $machineUptime[$machine] = "$systemUptime - $logonUser"
        }
        catch{
            Write-Error $_
            $machineUptime[$machine] = "Error retrieving uptime"
        }
    }
    else{
        $machineUptime[$machine] = "Offline"
    }
}
Create $machineUptime as an ordered hashtable (provided you have PowerShell v3 or newer):
$machineUptime = [ordered]@{}
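A quick illustration of the difference (nothing here beyond built-in PowerShell 3+ behaviour; the machine names are made up):

# Regular hashtable: enumeration order is not guaranteed
$h = @{}
$h['PC-03'] = 'Offline'
$h['PC-01'] = '5 days - jdoe'
$h.Keys    # may come back in any order

# Ordered dictionary: keys enumerate in insertion order
$o = [ordered]@{}
$o['PC-03'] = 'Offline'
$o['PC-01'] = '5 days - jdoe'
$o.Keys    # PC-03, PC-01

The rest of the loop works unchanged, since indexing with $machineUptime[$machine] behaves the same on an ordered dictionary.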

'Not an ARRAY reference' error thrown

I'm writing a Perl script that deals with an API which returns metrics about a set of URLs I pull from MySQL, and then posts these metrics back into a different table. Currently this piece of code:
my $content = $response->content;
my $jsontext = json_to_perl($content);
my $adsql = 'INSERT INTO moz (url_id,page_authority,domain_authority,links,MozRank_URL,MozRank_Subdomain,external_equity_links) VALUES (?,?,?,?,?,?,?)';
my $adrs = $db->prepare( $adsql );
my $adsql2 = 'UPDATE url
SET moz_crawl_date = NOW()
where url_id = ?;';
my $adrs2 = $db->prepare( $adsql2 );
my $currentUrlId = 0;
foreach my $row (@$jsontext){
    $adrs->execute($url_ids[$currentUrlId], $row->{'fmrp'}, $row->{'upa'}, $row->{'pda'}, $row->{'uid'}, $row->{'umrp'}, $row->{'ueid'});# || &die_clean("Couldn't execute\n$adsql\n".$db->errstr."\n" );
    $adrs2->execute($url_ids[$currentUrlId]);
    $currentUrlId++;
}
is throwing this error:
Not an ARRAY reference at ./moz2.pl line 124.
this is line 124:
foreach my $row (@$jsontext){
This whole chunk of code is in a while loop. I am actually able to iterate a couple of times and fill my MySQL table before the script fails (technically the program works, but I don't want to just leave an error in it).
Anybody have any suggestions?
Perl gave you the correct answer:
Not an ARRAY reference: @$jsontext
You are dereferencing $jsontext, which is the result of json_to_perl(string), as an array.
But json_to_perl() didn't return an arrayref.
json_to_perl seems to be from this API: http://search.cpan.org/~bkb/JSON-Parse-0.31/lib/JSON/Parse.pod#json_to_perl
which returns according to the doc either an arrayref or a hashref.
Apparently it returned a hashref in your case, so you have to add logic to deal with the HASH case, which seems to be a single row.
if (ref $jsontext eq 'HASH') {
    # seems to be a single row
    $adrs->execute($url_ids[$currentUrlId], $jsontext->{'fmrp'}, $jsontext->{'upa'}, $jsontext->{'pda'}, $jsontext->{'uid'}, $jsontext->{'umrp'}, $jsontext->{'ueid'});# || &die_clean("Couldn't execute\n$adsql\n".$db->errstr."\n" );
    $adrs2->execute($url_ids[$currentUrlId]);
    $currentUrlId++;
} elsif (ref $jsontext eq 'ARRAY') {
    foreach my $row (@$jsontext){
        $adrs->execute($url_ids[$currentUrlId], $row->{'fmrp'}, $row->{'upa'}, $row->{'pda'}, $row->{'uid'}, $row->{'umrp'}, $row->{'ueid'});# || &die_clean("Couldn't execute\n$adsql\n".$db->errstr."\n" );
        $adrs2->execute($url_ids[$currentUrlId]);
        $currentUrlId++;
    }
}
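An alternative sketch that avoids duplicating the execute() calls is to normalise the decoded value into a list first (same variables as above):

# Treat a single hashref as a one-element list so one loop handles both cases
my @rows = ref $jsontext eq 'ARRAY' ? @$jsontext : ($jsontext);

foreach my $row (@rows) {
    $adrs->execute($url_ids[$currentUrlId], $row->{'fmrp'}, $row->{'upa'}, $row->{'pda'},
                   $row->{'uid'}, $row->{'umrp'}, $row->{'ueid'});
    $adrs2->execute($url_ids[$currentUrlId]);
    $currentUrlId++;
}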

perl script to create xml from mysql query - out of memory

I need to generate an XML file from database records, and I get an "out of memory" error. Here's the script I am using; I found it through Google, but it's not suitable for me and it also exhausts the server's allocated memory. It's a start, though.
#!/usr/bin/perl
use warnings;
use strict;
use XML::Simple;
use DBI;
my $dbh = DBI->connect('DBI:mysql:db_name;host=host_address','db_user','db_pass')
or die DBI->errstr;
# Get an array of hashes
my $recs = $dbh->selectall_arrayref('SELECT * FROM my_table',{ Columns => {} });
# Convert to XML where each hash element becomes an XML element
my $xml = XMLout( {record => $recs}, NoAttr => 1 );
print $xml;
$dbh->disconnect;
This script only prints the records, because I tested with a WHERE clause for a single row id.
First of all, I couldn't manage to make it save the output to a file.xml.
Second, I need to somehow split the "job" into multiple jobs and then put the XML file together in one piece.
I have no idea how to achieve either.
Constraint: No access to change server settings.
These are the problem lines:
my $recs = $dbh->selectall_arrayref('SELECT * FROM my_table',{ Columns => {} });
This reads the whole table into memory, representing every single row as an array of values.
my $xml = XMLout( {record => $recs}, NoAttr => 1 );
This probably builds an even larger structure: the whole XML document as a single string, in one go.
The lowest-memory solution involves loading the table one row at a time and printing that row out immediately. In DBI, you can set up the query so that you fetch one row at a time in a loop.
You will need to play with this before the result looks like your intended output (I haven't tried to match your XML::Simple output; I'm leaving that to you):
print "<records>\n";
my $sth = $dbh->prepare('SELECT * FROM my_table');
$sth->execute;
while ( my $row = $sth->fetchrow_arrayref ) {
    # Convert db row to XML row
    print XMLout( {row => $row}, NoAttr => 1 ), "\n";
}
print "</records>\n";
Perl can use open( my $fh, '>', $filename ) to open a file and print $fh $string to write to it, or you could just run your script and redirect its output to a file, e.g. perl myscript.pl > table.xml
It's the SELECT * with no constraints that is killing your memory. Add some constraint to your query, e.g. a date or id range, and use a loop to execute the query and do your output in chunks. That way you won't need to load the whole table into memory before you get started on the output.
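As a rough sketch of that chunking idea, assuming my_table has an integer primary key column called id (the column name, chunk size and output file name are illustrative):

#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;
use DBI;

my $dbh = DBI->connect('DBI:mysql:db_name;host=host_address','db_user','db_pass')
    or die DBI->errstr;

open( my $out, '>', 'table.xml' ) or die "Cannot open table.xml: $!";
print {$out} "<records>\n";

my $chunk = 1000;    # rows per query, tune to taste
my ( $min_id, $max_id ) = $dbh->selectrow_array('SELECT MIN(id), MAX(id) FROM my_table');
my $sth = $dbh->prepare('SELECT * FROM my_table WHERE id BETWEEN ? AND ?');

for ( my $start = $min_id; $start <= $max_id; $start += $chunk ) {
    $sth->execute( $start, $start + $chunk - 1 );
    while ( my $row = $sth->fetchrow_hashref ) {
        # one <record> element per row, written out immediately
        print {$out} XMLout( { record => $row }, NoAttr => 1, RootName => undef ), "\n";
    }
}

print {$out} "</records>\n";
close $out;
$dbh->disconnect;

Because each row is fetched, converted and written before the next one is read, memory use stays roughly constant regardless of table size.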

Trouble getting Google Books API data via PHP

The current code takes form input and does THIS to it:
$apikey = 'myapikey';
$q = urlencode($bookSearchTerm);
$endpoint = 'https://www.googleapis.com/books/v1/volumes?q=' . $q . '&key=' . $apikey;
$session = curl_init($endpoint);
curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($session);
curl_close($session);
$search_results = json_decode($data);
if ($search_results === NULL) die('Error parsing json');
Just for kicks, I also did
echo $endpoint;
which shows
https://www.googleapis.com/books/v1/volumes?q=lord+of+the+rings&key=myapikey
When I enter that URL into a browser, I get a screen full o' data, telling me that, among other things, there are 814 items.
Yet when I run the code on a page, I get
Error parsing json
Can someone show me where I'm going wrong?
Thanks in advance.
Going by the response to the comment, it could be set, or maybe not. It may also be the case that what is returned isn't parseable because it isn't in the right data format. Check what $data gives back as a result. If it's correct, then according to the json_decode doc on the PHP site it may be that it's simply too big for the parser (it reaches the recursion limit), though I doubt that.
It is also possible that what is returned simply overflows PHP's allocated memory limit. PHP parses JSON into nested arrays or objects, which can get expensive, so if what you get back is too big, json_decode will fail to finish parsing and just return null.
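If it helps, here is a small debugging sketch using standard PHP functions (curl_error, json_last_error_msg) and the same variables as the code above:

$data = curl_exec($session);
if ($data === false) {
    // The request itself failed; curl_error says why (call it before curl_close)
    die('cURL error: ' . curl_error($session));
}
curl_close($session);

$search_results = json_decode($data);
if ($search_results === null) {
    // Report the actual decode failure plus a peek at the raw response
    die('JSON error: ' . json_last_error_msg() . ' - first bytes: ' . substr($data, 0, 200));
}

A common culprit with HTTPS endpoints like this one is curl_exec() returning false (for example when SSL verification fails), which json_decode() then turns into the generic null result above.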