Geo IP database query problem - MySQL

I'm running this query
SELECT
country,
countries.code,
countries.lat,
countries.lng,
countries.zoom,
worldip.start,
worldip.end
FROM countries, worldip
WHERE countries.code = worldip.code
AND
'91.113.120.5' BETWEEN worldip.start AND worldip.end
ORDER BY worldip.start DESC
on two tables with these fields:
worldip          countries
--------------   ----------------
start            code
end              country
code             lat
country_name     lng
                 zoom
Sometimes I'm getting two results, in two different countries, for one IP. I understand why
'91.113.120.5' BETWEEN worldip.start AND worldip.end
would return two different results, since 10 is between 9 and 11, but also between 5 and 12. I would have thought that including WHERE countries.code = worldip.code would have prevented this, or at least ensured I got the right country no matter how many results it returned, but it doesn't.
I also added ORDER BY worldip.start DESC, which seems to work, since the more accurate the IP address match, the higher up the list it appears. You can see it working (or not) here. But that's a quick fix and I'd like to do it right.
SQL is a real weak point for me. Can anyone explain what I'm doing wrong?

Firstly, nice app. I was looking for flights - I would love price comparisons and no #-based links, please. You could try a free geolocation service instead of maintaining your own GeoIP database. That aside, are your IP fields stored in a MySQL datatype that compares correctly? That may help you get the correct ordering. Otherwise the values are compared as strings, and problems arise when the IPs have different lengths, and so on.
With an integer representation of IPs you can use the <= and >= operators.
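For example, a minimal sketch of the integer comparison (assuming worldip.start and worldip.end currently hold dotted-quad strings; if they already hold unsigned integers, drop the INET_ATON() around those columns):
SELECT country, countries.code, countries.lat, countries.lng, countries.zoom
FROM countries
JOIN worldip ON countries.code = worldip.code
WHERE INET_ATON('91.113.120.5')
      BETWEEN INET_ATON(worldip.start) AND INET_ATON(worldip.end)
Converting the columns at query time prevents MySQL from using an index on them, so ideally start and end would be stored as INT UNSIGNED (populated via INET_ATON) in the first place.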

MySQL: using calculated counter column as key for sub-query

Sorry for the long backstory, but it is needed to clarify the question.
In my org the computers have names like CNT30[0-9]{3}[1-9a-z], for example cnt300021 or cnt30253a.
The last symbol is a "qualifier", so a single workplace may have several identically named computers assigned to it, distinguished only by this qualifier. For example, cnt300021 may be the desktop computer at workplace #002, and cnt30002a may be a notebook assigned to the same workplace. Workplaces are "virtual" and exist just for our (IT dept) convenience.
Each dept has its own unique [0-9]{3} range. For example, accounting's computers have names from cnt302751 up to cnt30299z, which gives them 25 unique workplaces max, with up to 35 computers per workplace. (IRL most users have one desktop PC, far fewer have a desktop and a notebook, and only 2 or 3 technicians have more than one notebook at their disposal.)
Recently, while doing an inventory of the computers' passports (unsure about the term: a paper document that is for a computer what a passport is for a person), I found that there are some holes in the sequential numbering. For example, we have cnt302531 and cnt302551, but no cnt302541, which means that there's no workplace #254.
What do I want to do? I want to find these gaps without searching manually. For this I need a loop from 1 to MaxComp=664 (no higher workplace numbers have been assigned yet).
That's what I could write using some pseudo-SQL-BASIC:
for a=0 to MaxComp
a$="CNT30"+right(a+1000,3)
'comparing only 8 leftmost characters, ignoring 9th one - the qualifier
b$=(select name from table where left(name,8) like a$)
print a$;b$
next a
That code should give me two columns: possible names and existing ones.
But I can't figure out how to implement this in an SQL query. What I tried:
# because of qualifier there may be several computers with same
# 8 leftmost characters
select @cnum:=@cnum+1 as CompNum, group_concat(name separator ',')
# PCs are inventoried by OCS-NG Inventory software
from hardware
cross join (select @cnum:=0) cnt
where left(hardware.name,8)=concat('CNT30',right(@cnum+1000,3))
limit 100
But this construct returns exactly one row. I can't understand whether this is possible without using stored procedures, and if it is possible, what I did wrong.
Found a working path:
(At first I tried to use a stored function.)
CREATE FUNCTION `count_comps`(num smallint) RETURNS tinytext CHARSET utf8
BEGIN
return (select group_concat(name separator ',')
from hardware where left(hardware.name,8)=concat('CNT30',right(num+1000,3))
);
END
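Once the function exists, a quick spot check for a single workplace number looks like this (using workplace #254 from the example above):
select count_comps(254);
-- returns the comma-separated names matching CNT30254, or NULL if the workplace has no computers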
Then I tried hard to replicate the function's results in a subquery. And I did it! Note: the inner select returns exactly the same results as the function does.
# Starting point. May be INcreased to narrow the results list
set @cnum:=0;
select
@cnum:=@cnum+1 as CompNum,
concat('CNT30',right(@cnum+1000,3)) as CalcNum,
# this
count_comps(#cnum) as hwns,
# and this gives equal results
(select group_concat(name separator ',')
from hardware where left(name,8)=calcnum
) hwn2
from hardware
# no more dummy tables here
# Ending point. May be DEcreased to narrow the results list
where @cnum<665;
So the wrong part of the "classical" approach was the use of the dummy table, which turns out not to be necessary.
Partial results example (starting with set @cnum:=479; and ending with where @cnum<530;):
CompNum, CalcNum, hwns, hwn2
'488', 'CNT30488', 'CNT304881', 'CNT304881'
'489', 'CNT30489', 'CNT304892', 'CNT304892'
'490', 'CNT30490', 'CNT304901,CNT304902,CNT304903', 'CNT304901,CNT304902,CNT304903'
'491', 'CNT30491', NULL, NULL
'492', 'CNT30492', NULL, NULL
'493', 'CNT30493', 'CNT304932', 'CNT304932'
'494', 'CNT30494', 'CNT304941', 'CNT304941'
I found that there are no workplaces #491 and #492. The next time PCs are added for the 'October Region' dept (range 480-529), at least two of the new PCs will get the names CNT304911 and CNT304921, filling this gap.

Error: MySQL client ran out of memory

Can anyone please advise me on this error...
The database has 40,000 news stories, but only the field 'story' is large;
'old' is a numeric value 0 or 1,
'title' and 'shortstory' are very short or NULL.
Any advice appreciated. This is the result of running a search query against the database.
Error: MySQL client ran out of memory
Statement: SELECT news30_access.usehtml, old, title, story, shortstory, news30_access.name AS accessname, news30_users.user AS authorname, timestamp, news30_story.id AS newsid FROM news30_story LEFT JOIN news30_users ON news30_story.author = news30_users.uid LEFT JOIN news30_access ON news30_users.uid = news30_access.uid WHERE title LIKE ? OR story LIKE ? OR shortstory LIKE ? OR news30_users.user LIKE ? ORDER BY timestamp DESC
The simple answer is: don't use story in the SELECT clause.
If you want the story, then limit the number of results being returned. Start with, say, 100 results by adding:
limit 100
to the end of the query. This will get the 100 most recent stories.
I also note that you are using LIKE on story as well as the other string columns. You probably want to be using MATCH with a full-text index. This doesn't solve your immediate problem (which is returning too much data to the client), but it will make your queries run faster.
To learn about full text search, start with the documentation.
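As a rough sketch of that suggestion (assuming MyISAM, or InnoDB on MySQL 5.6+, both of which support FULLTEXT indexes; the index name ft_news is made up here):
ALTER TABLE news30_story
  ADD FULLTEXT INDEX ft_news (title, story, shortstory);

SELECT news30_story.id, title, shortstory, timestamp
FROM news30_story
WHERE MATCH(title, story, shortstory) AGAINST ('search terms')
ORDER BY timestamp DESC
LIMIT 100
The MATCH() column list has to correspond to a FULLTEXT index definition, and the LIMIT keeps the result set small, per the advice above.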

Long query time

I have a query to fetch the last 10 TV show episodes, sorted by date (from newest to oldest), like this:
return $this->getEntityManager()->createQuery('SELECT t FROM AppBundle:TvShow t JOIN t.episodes e ORDER BY e.date DESC')->setFirstResult(0)->setMaxResults(10)->getResult();
It returns only nine episodes. We have similar queries on the same page too, and they work fine. Only when I setMaxResults to 11 does it return 10 episodes.
Another issue related to this query: it takes too long compared to the other similar queries (about 200ms).
What do you suggest for me?
Thanks in advance.
As in Richard's answer: the wrong result count with setMaxResults on a fetch-joined collection is normal Doctrine behaviour.
To make it work you can use the Doctrine Paginator (available since Doctrine 2.2) (docs: http://docs.doctrine-project.org/en/latest/tutorials/pagination.html)
Example usage:
use Doctrine\ORM\Tools\Pagination\Paginator;
$query->setMaxResults($limit);
$query->setFirstResult($offset);
$results = new Paginator($query, $fetchJoin = true);
Long query time looks like a topic for another question.
Straight from the documentation:
If your query contains a fetch-joined collection specifying the result limit methods are not working as you would expect. Set Max Results restricts the number of database result rows, however in the case of fetch-joined collections one root entity might appear in many rows, effectively hydrating less than the specified number of results.
https://doctrine-orm.readthedocs.org/en/latest/

MySQL order by problems

I have the following code:
echo "<form><center><input type=submit name=subs value='Submit'></center></form>";
$val=$_POST['resulta']; //this is from a textarea name='resulta'
if (isset($_POST['subs'])) //from submit name='subs'
{
$aa=mysql_query("select max(reservno) as 'maxr' from reservation") or die(mysql_error()); //select maximum reservno
$bb=mysql_fetch_array($aa);
$cc=$bb['maxr'];
$lines = explode("\n", $val);
foreach ($lines as $line) {
mysql_query("insert into location_list (reservno, location) values ('$cc', '$line')")
or die(mysql_error()); //insert value of textarea then save it separately in location_list if \n is found
}
If I input the following data into the textarea (assume that the maximum reservno in the reservation table is '00014'),
Davao - Cebu
Cebu - Davao
then submit it, I'll have these data in my location_list table:
loc_id || reservno || location
00001 || 00014 || Davao - Cebu
00002 || 00014 || Cebu - Davao
Then this code:
$gg=mysql_query("SELECT GROUP_CONCAT(IF((@var_ctr := @var_ctr + 1) = @cnt,
location,
SUBSTRING_INDEX(location,' - ', 1)
)
ORDER BY loc_id ASC
SEPARATOR ' - ') AS locations
FROM location_list,
(SELECT @cnt := COUNT(1), @var_ctr := 0
FROM location_list
WHERE reservno='$cc'
) dummy
WHERE reservno='$cc'") or die(mysql_error()); //QUERY IN QUESTION
$hh=mysql_fetch_array($gg);
$ii=$hh['locations'];
mysql_query("update reservation set itinerary = '$ii' where reservno = '$cc'")
or die(mysql_error());
is supposed to update the reservation table with 'Davao - Cebu - Davao', but it's returning 'Davao - Cebu - Cebu' instead. I was previously helped by this forum to get this code working, but now I'm facing another difficulty. I just can't get it to work. Please help me. Thanks in advance!
I got it working (without ORDER BY loc_id ASC) as long as I set the table to order by loc_id ascending in phpMyAdmin's Operations tab. But whenever I delete all the data, it goes back to loc_id descending, so I have to reset it. It doesn't entirely solve the problem, but I guess this is as far as I can go. :)) I just have to make sure that the table column loc_id is always in ascending order. Thank you everyone for your help! I really appreciate it! But if you have a better answer, like how to keep the table column always in ascending order, or a better query, etc., feel free to post it here. May God bless you all!
The database server is allowed to rewrite your query to optimize its execution. This might affect the order of the individual parts, in particular the order in which the various assignments are executed. I assume that some such reordering causes the result of the query to become undefined, in such a way that it works on SQL Fiddle but not on your actual production system.
I can't put my finger on the exact location where things go wrong, but I believe the core of the problem is that SQL is intended to work on relations, whereas you are trying to use it for sequential programming. I suggest you retrieve the data from the database using portable SQL without any variable hackery, and then use PHP to perform any post-processing you need. PHP is much better suited to expressing the ideas you're formulating, and no optimization or reordering of statements will get in your way there. And since your query currently only produces a single value, fetching multiple rows and combining them into a single value in the PHP code shouldn't increase complexity too much.
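A sketch of the retrieval side under that approach (the literal reservno stands in for the $cc value, which you would bind or escape in PHP):
SELECT location
FROM location_list
WHERE reservno = '00014'
ORDER BY loc_id ASC
Each leg then comes back as its own row in a well-defined order, and trimming the repeated city names and joining the pieces with ' - ' is a few lines of ordinary PHP.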
Edit:
While discussing another answer using a similar technique (also by Omesh, just like the answer your code is based upon), I found this in the MySQL manual:
As a general rule, you should never assign a value to a user variable
and read the value within the same statement. You might get the
results you expect, but this is not guaranteed. The order of
evaluation for expressions involving user variables is undefined and
may change based on the elements contained within a given statement;
in addition, this order is not guaranteed to be the same between
releases of the MySQL Server.
So there are no guarantees about the order in which these variable assignments are evaluated, and therefore no guarantees that the query does what you expect. It might work, but it might fail suddenly and unexpectedly. Therefore I strongly suggest you avoid this approach unless you have some reliable mechanism to check the validity of the results, or really don't care whether they are valid.

How to tune the following MySQL query?

I am using the following MySQL query, which is working fine, I mean it gives me the desired output, but... let's first see the query:
select
fl.file_ID,
length(fl.filedesc) as l,
case
when
fl.part_no is null
and l>60
then concat(fl.fileno,' ', left(fl.filedesc, 60),'...')
when
fl.part_no is null
and length(fl.filedesc)<=60
then concat(fl.fileno,' ',fl.filedesc)
when
fl.part_no is not null
and length(fl.filedesc)>60
then concat(fl.fileno,'(',fl.part_no,')', left(fl.filedesc, 60),'...')
when
fl.part_no is not null
and length(fl.filedesc)<=60
then concat(fl.fileno,'(',fl.part_no,')',fl.filedesc)
end as filedesc
from filelist fl
I don't want to call the length function repeatedly because I guess it would hit the database every time, causing a performance issue. Please suggest whether I can compute the length once and use it several times.
Once you have accessed a given row, what you do with the columns has only a small impact on performance. So your worry that repeated use of that length function "hits the database" each time is not as serious as you think.
The analogy I would use is a postal carrier delivering mail to your house, which is miles outside of town. He drives for 20 minutes to your mailbox, and then he worries that it takes too much time to insert one letter at a time into your mailbox, instead of all the letters at once. The cost of that inefficiency is insignificant compared to the long drive.
That said, you can make the query more concise or easier to code or to look at. But this probably won't have a big benefit for performance.
select
fl.file_ID,
concat(fl.fileno,
ifnull(concat('(',fl.part_no,')'), ' '),
left(fl.filedesc,60),
if(length(fl.filedesc)>60,'...','')
) as filedesc
from filelist fl
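If you do still want to evaluate length() only once per row, one option (a sketch, not part of the answer above) is to compute it in a derived table and reference the alias from the outer query:
select
  t.file_ID,
  concat(t.fileno,
         ifnull(concat('(',t.part_no,')'), ' '),
         left(t.filedesc,60),
         if(t.l > 60,'...','')
  ) as filedesc
from (
  select fl.file_ID, fl.fileno, fl.part_no, fl.filedesc,
         length(fl.filedesc) as l    -- computed once per row
  from filelist fl
) t
As the answer notes, though, this is about readability rather than speed.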