Complex joins with Peewee - mysql

It's rather embarrassing to ask this question, since it seems to be so trivial, yet I can't find a solution that works.
I have the following function:
def planner(departure_id, arrival_id):
    departure = Stop.get(Stop.id == departure_id)
    arrival = Stop.get(Stop.id == arrival_id)
    buses = Bus.select().join(RideStopRelationship).join(Stop).where(Stop.id == departure)
    for bus in buses:
        print bus.line
        for stop in bus.stops:
            print stop.time, stop.stop.name
based on the following models:
class Stop(BaseModel):
    name = CharField()
    #lat = FloatField()
    #lng = FloatField()

class Bus(BaseModel):
    line = IntegerField()
    number = IntegerField()
    direction = IntegerField()

class RideStopRelationship(BaseModel):
    bus = ForeignKeyField(Bus, related_name="stops")
    stop = ForeignKeyField(Stop, related_name="buses")
    time = TimeField()
The crucial line is Bus.select().join(RideStopRelationship).join(Stop).where(Stop.id == departure). I'm trying to get all buses that will stop at both departure and arrival. However, the above query returns all buses that stop at departure. How would I get buses that stop at both 'departure' and 'arrival'?
If I'm making this too complicated (either my models being too complicated, or my query), feel free to correct me.
EDIT:
There's one way that does work:
buses_departure = Bus.select().join(RideStopRelationship).join(Stop).where(Stop.id == departure)
buses_arrival = Bus.select().join(RideStopRelationship).join(Stop).where(Stop.id == arrival)
buses = Bus.select().where(Bus.id << buses_departure & Bus.id << buses_arrival)
but it's rather long for what should be a simple query...

You might try something like this:
departure = Stop.get(...)
arrival = Stop.get(...)

query = (Bus
         .select(Bus)
         .join(RideStopRelationship)
         .where(RideStopRelationship.stop << [departure, arrival])
         .group_by(Bus)
         .having(fn.Count(Bus.id) == 2))
Unrelated, but one thing to note is that, due to the way Python evaluates operators, you need to put parentheses around your in (<<) expressions:
buses = Bus.select().where(
    (Bus.id << buses_departure) &
    (Bus.id << buses_arrival))
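For reference, here is a minimal sketch (not from the original post) of how the whole planner function could look using the group_by/having approach above. It assumes the models from the question and that each bus has at most one RideStopRelationship row per stop; if a bus can have several rows per stop, you would need to count distinct stops instead.
from peewee import fn

def planner(departure_id, arrival_id):
    departure = Stop.get(Stop.id == departure_id)
    arrival = Stop.get(Stop.id == arrival_id)
    # Keep relationship rows for either stop, group per bus, and require
    # two matching rows so the bus serves both stops.
    return (Bus
            .select(Bus)
            .join(RideStopRelationship)
            .where(RideStopRelationship.stop << [departure, arrival])
            .group_by(Bus)
            .having(fn.Count(Bus.id) == 2))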


Applying Periodic Boundary Conditions for u(x,t) array

So I think I applied the periodic boundary conditions incorrectly. This is the Lax-Wendroff method:
import numpy as np

def LW_hflux_eq(a, c, delt, delx, u0, flux):
    # L (domain length) is assumed to be defined globally
    x = np.linspace(0, L, round(L/delx))
    t = np.linspace(0, (L/a)/4, round(((L/a)/4)/delt))
    u_arr = np.zeros((len(x), len(t)+1))
    # Initial condition
    u_arr[:, 0] = u0
    countx = np.arange(1, len(x)-1)
    countt = np.arange(0, len(t))
    # Lax-Wendroff (no limiter)
    for l in countt:
        for j in countx:
            u_arr[j, l+1] = u_arr[j, l] - c*(u_arr[j, l] + (((1-c)/2)*(u_arr[j+1, l] - u_arr[j, l])*flux) - (u_arr[j-1, l] + ((1-c)/2)*(u_arr[j, l] - u_arr[j-1, l])*flux))
        u_arr[-1, l+1] = u_arr[-2, l]
        u_arr[0, l+1] = u_arr[-1, l]
    return u_arr
The last two lines before the return are the PBC. Am I doing this correctly? I am getting weird errors down the road when applying this function, and I'm trying to find the root cause.
Thanks!
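As a point of comparison (an illustrative sketch, not part of the original question): periodic boundary conditions are often implemented by letting the stencil wrap around with modular indexing or np.roll, rather than by copying edge values after the update. A minimal example for a simple advection step (Lax-Friedrichs here, just to keep it short, not the Lax-Wendroff flux above):
import numpy as np

def periodic_step(u, c):
    # u[j-1] and u[j+1] wrap around the ends of the array automatically
    u_left = np.roll(u, 1)
    u_right = np.roll(u, -1)
    # Lax-Friedrichs update, shown only to illustrate the periodic stencil
    return 0.5*(u_left + u_right) - 0.5*c*(u_right - u_left)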

Writing a circuit in ZoKrates to prove age is over 21 years

I am trying to see if I can use ZoKrates in a scenario where a user can prove to a verifier that their age is over 21 years without revealing the date of birth. I think it's a good use case for zero-knowledge proofs, but I'd like to understand the best way to implement it.
The circuit code (sample) takes the name of the user as a public input (name attestation is done by a trusted authority like the DMV and is most likely a combination of offline/online mechanisms), and then the date of birth, which is a private input.
// 8297122105 = "Razi" in decimal.
def main(pubName, private yearOfBirth, private centuryOfBirth):
  x = 0
  y = 0
  z = 0
  x = if centuryOfBirth == 19 then 1 else 0 fi
  y = if yearOfBirth < 98 then 1 else 0 fi
  z = if pubName == 8297122105 then 1 else 0 fi
  total = x + y + z
  result = if total == 3 then 1 else 0 fi
  return result
Now, using the ./target/release/zokrates generate-proof command, I get the output that can be used as an input to verifier.sol.
A = Pairing.G1Point(0x24cdd31f8e07e854e859aa92c6e7f761bab31b4a871054a82dc01c143bc424d, 0x1eaed5314007d283486826e9e6b369b0f1218d7930cced0dd0e735d3702877ac);
A_p = Pairing.G1Point(0x1d5c046b83c204766f7d7343c76aa882309e6663b0563e43b622d0509ac8e96e, 0x180834d1ec2cd88613384076e953cfd88448920eb9a965ba9ca2a5ec90713dbc);
B = Pairing.G2Point([0x1b51d6b5c411ec0306580277720a9c02aafc9197edbceea5de1079283f6b09dc, 0x294757db1d0614aae0e857df2af60a252aa7b2c6f50b1d0a651c28c4da4a618e], [0x218241f97a8ff1f6f90698ad0a4d11d68956a19410e7d64d4ff8362aa6506bd4, 0x2ddd84d44c16d893800ab5cc05a8d636b84cf9d59499023c6002316851ea5bae]);
B_p = Pairing.G1Point(0x7647a9bf2b6b2fe40f6f0c0670cdb82dc0f42ab6b94fd8a89cf71f6220ce34a, 0x15c5e69bafe69b4a4b50be9adb2d72d23d1aa747d81f4f7835479f79e25dc31c);
C = Pairing.G1Point(0x2dc212a0e81658a83137a1c73ac56d94cb003d05fd63ae8fc4c63c4a369f411c, 0x26dca803604ccc9e24a1af3f9525575e4cc7fbbc3af1697acfc82b534f695a58);
C_p = Pairing.G1Point(0x7eb9c5a93b528559c9b98b1a91724462d07ca5fadbef4a48a36b56affa6489e, 0x1c4e24d15c3e2152284a2042e06cbbff91d3abc71ad82a38b8f3324e7e31f00);
H = Pairing.G1Point(0x1dbeb10800f01c2ad849b3eeb4ee3a69113bc8988130827f1f5c7cf5316960c5, 0xc935d173d13a253478b0a5d7b5e232abc787a4a66a72439cd80c2041c7d18e8);
K = Pairing.G1Point(0x28a0c6fff79ce221fccd5b9a5be9af7d82398efa779692297de974513d2b6ed1, 0x15b807eedf551b366a5a63aad5ab6f2ec47b2e26c4210fe67687f26dbcc7434d);
Question
Consider a scenario where a user (say Razi) can take the proof above (probably in the form of a QR code) and scan it on a machine (which confirms age is over 21) that will run the verifierTx method on the contract. Since the proof explicitly has "Razi" inside it, and the contract can verify the age without knowing the actual date of birth, we get better privacy. However, the challenge is that anyone else can now reuse the proof, since it was used within a transaction. One way to mitigate this issue is to make sure that the proof is either valid for a limited time or only good for one-time use. Another way is to ensure proof of the user's identity ("Razi") in a way that is satisfied beyond doubt (e.g. by confirming identity on the blockchain, etc.).
Are there ways to make sure proof can be used by a user more than once?
I hope the question and explanation make sense. Happy to elaborate more on this, so let me know.
What you will need is:
Razi owning an ethereum public/private key
a (salted) fingerprint fact (e.g. birthday as unix timestamp) associated with Razi's public ethereum address and endorsed on-chain by an authority
Now you can write a ZoKrates program like this
def main(private field salt, private field birthdayAsUnixTs, field pubFactHashA, field pubFactHashB, field ts) -> (field):
  // check that the fact is corresponding to the endorsed salted fact fingerprint onchain
  h0, h1 = sha256packed(0, 0, salt, birthdayAsUnixTs)
  h0 == pubFactHashA
  h1 == pubFactHashB
  // 18 years is pseudo code only!
  field ok = if birthdayAsUnixTs * 18 years <= ts then 1 else 0 fi
  return ok
Now in your contract you can
check that msg.sender is the owner of the endorsed fact
require(ts <= now)
call verifier with the proof and public input: (factHash, ts, 1)
You can do that by hashing the proof and adding that hash in a list of "used proofs", so no one can use it again.
Now, ZoKrates adds randomness in the generation of the proof in order to prevent revealing that the same witness has been used, since zk-proofs do not reveal anything about the witness. So, if you want to prevent the person from using his credential (accrediting that he is over 21 years old) more than once, you have to use a nullifier (see ZCash's approach in the "How zk-SNARKs are applied to create a shielded transaction" part).
Basically, you build a string with Razi's data, nullifier_string = centuryOfBirth + yearOfBirth + pubName, and then you publish its hash, nullifier = H(nullifier_string), in a table of revealed nullifiers. In the ZoKrates scheme you have to add the nullifier as a public input and then verify that the nullifier corresponds to the data provided. Something like this:
import "utils/pack/unpack128.code" as unpack
import "hashes/sha256/256bitPadded.code" as hash
import "utils/pack/nonStrictUnpack256.code" as unpack256
def main(pubName,private yearOfBirth, private centuryOfBirth, [2]field nullifier):
field x = if centuryOfBirth == 19 then 1 else 0 fi
field y = if yearOfBirth < 98 then 1 else 0 fi
field z = if pubName == 8297122105 then 1 else 0 fi
total = x + y + z
result = if total == 3 then 1 else 0 fi
null0 = unpack(nullifier[0])
null1 = unpack(nullifier[1])
nullbits = [...null0,...null1]
nullString = centuryOfBirth+yearOfBirth+pubName
unpackNullString = unpack256(nullString)
nullbits == hash(unpackNullString)
return result
This has to be done in order to prevent Razi from providing a random nullifier unrelated to his data.
Once you have done this, you can check whether the provided nullifier has already been used by looking it up in the revealed-nullifier table.
The problem with this in your case is that the year of birth is a weak value to hash. Someone can run a brute-force attack on the nullifier and reveal Razi's year of birth. You have to add a strong value to the verification (Razi's secret ID? a digital signature?) to prevent this attack.
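To make the brute-force concern concrete, here is a small Python sketch (illustrative only; the encoding and hash differ from ZoKrates' sha256packed) showing how cheaply an attacker could enumerate the tiny birth-year space once a nullifier built only from low-entropy data is public:
import hashlib

pub_name = 8297122105  # "Razi" in decimal, already public
published_nullifier = hashlib.sha256(b"19|97|8297122105").hexdigest()

# The attacker simply tries every plausible century/year combination.
for century in (18, 19, 20):
    for year in range(100):
        guess = "%d|%d|%d" % (century, year, pub_name)
        if hashlib.sha256(guess.encode()).hexdigest() == published_nullifier:
            print("year of birth recovered:", century, year)
A few hundred hashes are enough, which is why a high-entropy secret (a salt, a secret ID, or a signature) has to be mixed into the hashed string.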
Note 1: I have an old version of ZoKrates, so double-check the import paths.
Note 2: Check the ZoKrates hash function implementation; you may have problems with the padding of the inputs. The unpack256 function should prevent this, I suppose, but double-check to avoid bugs.

Include nested entity details but don't group by them when grouping by other fields

I'm working with database-first C# MVC, EF6, LINQ and JSON to try and pass data to both Highcharts and Google Maps for some of my reporting.
If I could add an image I would show you the relevant portion of my model, but sadly I need more reputation to do that...
The portion of the entity model I'm concentrating on right now is based on a central Docket that contains a BuildingCode as part of a one-to-many relationship to a building with an address and a further relationship to the building's polygons (for mapping). Dockets are also classified by one or more DocketTypes, so there is a many-to-many relationship between Dockets and DocketTypes, which is not directly exposed through the EF.
As an example, a Docket, which represents an investigation, could relate to the theft of a mobile phone in building A located on campus X; not only was the phone stolen, but the assailant also assaulted the victim in order to steal it. So there are two DocketTypes here: 1. theft of mobile phone and 2. assault. Note: this is fictitious and for illustration purposes only.
One of my fundamental reports requires that I count how many DocketTypes affect each building and each campus in a given period. When I display this I also need to show what the DocketTypes are.
I've had no end of nightmares trying to find a way to get this right; I keep running into circular reference errors and needing explicit conversions when trying to shape the data with LINQ so that I can pass a single nested object through JSON to the client side, where the display will happen.
In the code below, I am told I need an explicit conversion:
Cannot implicitly convert type 'Campus_Investigator.ViewModels.DocketTypeViewModel' to 'System.Collections.Generic.IEnumerable<Campus_Investigator.ViewModels.DocketTypeViewModel>'. An explicit conversion exists (are you missing a cast?)
var currentDocketQuery = from d in db.Dockets
                         from dt in d.DocketTypes
                         from bp in d.BuildingDetail.BuildingPolygons
                         where d.OccurrenceStartDate >= datetime && d.BuildingDetail.CampusName == Campus
                         select new CampusBuildingDocketTypeViewModel()
                         {
                             BuildingCode = d.BuildingDetail.BuildingCode,
                             BuildingName = d.BuildingDetail.BuildingName,
                             //BuildingPolygons = d.BuildingDetail.BuildingPolygons,
                             DocketTypes = new DocketTypeViewModel()
                             {
                                 Category = dt.Category,
                                 SubCategory = dt.SubCategory,
                                 ShortDescription = dt.ShortDescription
                             }
                         };
I'd appreciate any ideas on how I can explicitly convert this, or is there a better method I can use that avoids the circular reference error?
You included some redundant parts in your query (each extra from performs an inner join). The from bp in d.BuildingDetail.BuildingPolygons is joined in but never shown in the result, so it serves no purpose and may even produce duplicate elements in the result. The from dt in d.DocketTypes is also joined in unnecessarily; you do need it in the result, but since DocketTypes is output per d in db.Dockets, it can simply be queried like this:
var currentDocketQuery = from d in db.Dockets
                         where d.OccurrenceStartDate >= datetime && d.BuildingDetail.CampusName == Campus
                         select new CampusBuildingDocketTypeViewModel()
                         {
                             BuildingCode = d.BuildingDetail.BuildingCode,
                             BuildingName = d.BuildingDetail.BuildingName,
                             //BuildingPolygons = d.BuildingDetail.BuildingPolygons,
                             DocketTypes = d.DocketTypes
                         };
In fact, I can see the commented line //BuildingPolygons = d.BuildingDetail.BuildingPolygons, so if you want to include that, it should also work.
If the DocketTypes property has a different type from d.DocketTypes, then you need a simple projection like this:
var currentDocketQuery = from d in db.Dockets
                         where d.OccurrenceStartDate >= datetime && d.BuildingDetail.CampusName == Campus
                         select new CampusBuildingDocketTypeViewModel()
                         {
                             BuildingCode = d.BuildingDetail.BuildingCode,
                             BuildingName = d.BuildingDetail.BuildingName,
                             //BuildingPolygons = d.BuildingDetail.BuildingPolygons,
                             DocketTypes = d.DocketTypes.Select(e => new DocketTypeViewModel()
                             {
                                 Category = e.Category,
                                 SubCategory = e.SubCategory,
                                 ShortDescription = e.ShortDescription
                             })
                         };
I managed to solve this one by using the code below. The major hassle is the circular referencing that exists in the model: when JSON serializes it, everything falls apart, so it takes a lot of transforming to make sure I only extract what I need. In this case that is grouped campus and building data (the code below includes the polygons, which were only half commented out above) plus the details of the DocketTypes that occurred at each building.
var datetime = DateTime.Now.AddDays(-30);
var campusDocket = from d in db.Dockets
                   where d.OccurrenceStartDate >= datetime && d.BuildingDetail.CampusName == Campus
                   group d by new { d.BuildingDetail.CampusName, d.BuildingDetail.BuildingCode, d.BuildingDetail.BuildingName } into groupdata
                   select new CampusBuildingDocketTypeViewModel
                   {
                       BuildingCode = groupdata.Key.BuildingCode,
                       BuildingName = groupdata.Key.BuildingName,
                       CampusName = groupdata.Key.CampusName,
                       Count = groupdata.Count(),
                       BuildingPolygons = from bp in db.BuildingPolygons
                                          where bp.BuildingCode == groupdata.Key.BuildingCode
                                          select new BuildingPolygonViewModel
                                          {
                                              Accuracy = bp.Accuracy,
                                              BuildingCode = bp.BuildingCode,
                                              PolygonOrder = bp.PolygonOrder,
                                              Latitude = bp.Latitude,
                                              Longitude = bp.Longitude
                                          },
                       DocketTypes = from doc in db.Dockets
                                     from dt in doc.DocketTypes
                                     where doc.OccurrenceStartDate >= datetime && doc.BuildingCode == groupdata.Key.BuildingCode
                                     select new DocketTypeViewModel
                                     {
                                         Category = dt.Category,
                                         SubCategory = dt.SubCategory,
                                         ShortDescription = dt.ShortDescription
                                     }
                   };
The answer, again, is ViewModels. I'm finding ViewModels seem to solve a lot of problems...

Keeping a variable's value in a recursive function, Python 3.3

I managed to do it some other way, but I have a question. I had this code before:
def jumphunt(start, mylist, count=0):
    if count < len(mylist):
        place = mylist[start]
        print(place)
        if place == 0:
            return True
        elif start >= len(mylist) or start < 0:
            return False
        move_left = (start - place)
        move_right = (start + place)
        return jumphunt(move_right, mylist, count+1) or jumphunt(move_left, mylist, count+1)
    else:
        return False
but for some reason it's not trying both ways to get to the last item on the list.
For example: [1, 2, 2, 3, 4, 5, 3, 2, 1, 7, 0] with start = mylist[0].
It's supposed to jump like this: from 1 to 2 to 4, then 1 left, then 2 left, then 5 right, to 0.
But it keeps trying to go right, and then the index is out of range, etc.
I thought that if you return this or that, it will try both until it reaches True. Why won't it work here?
Thanks!
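One likely cause of the index-out-of-range error above: place = mylist[start] is evaluated before the bounds check on start, so an out-of-range jump crashes before return False can be reached. A minimal sketch of the reordering (same signature, just the checks moved up):
def jumphunt(start, mylist, count=0):
    # Check the bounds *before* indexing, otherwise an out-of-range jump
    # raises IndexError instead of returning False.
    if start < 0 or start >= len(mylist) or count >= len(mylist):
        return False
    place = mylist[start]
    if place == 0:
        return True
    return (jumphunt(start + place, mylist, count + 1) or
            jumphunt(start - place, mylist, count + 1))
With this ordering, jumphunt(0, [1, 2, 2, 3, 4, 5, 3, 2, 1, 7, 0]) returns True.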
Include the value you want to keep as a default parameter for the method, like this:
def my_func(int, list, i=0):
    a = (i + int)
    if int == 0:
        return True
    elif a > len(list):
        i -= int
    else:
        i += int
    int = list[i]
    return my_func(int, list, i)
Bear in mind that it may not even always be possible to arrive at the end of the list doing the jumping pattern you describe, and even if it is possible, this method may choose the wrong branch.
A better algorithm would look like this:
def branching_search(list, start):
    marks = [0]*len(list)
    pos = start
    while list[pos] != 0:
        marks[pos] += 1
        if marks[pos] % 2 == 0 and pos + list[pos] < len(list):
            pos += list[pos]
        elif marks[pos] % 2 == 1 and pos - list[pos] >= 0:
            pos -= list[pos]
        else:
            return False
        if all(item == 0 or item > 1 for item in marks):
            return False
    return True
This way, if it comes to an item that it has already visited, it will go the opposite direction from the one it went last time. Also, if it comes to an item that it can't leave without going out of bounds, or if there is no way to get to the end, it will give up and return False.
EDIT: I realized there are a number of flaws in this algorithm! Although it is better than the first approach, it is not guaranteed to work, and the reasons are somewhat complicated.
Just imagine this array (the unimportant elements are left blank):
1, 2, , 5, , , , , 5, 0
The first two elements would get only one mark (thus the loop checking condition would not work), but it would still get stuck looping between the two fives.
Here is a method that will always work:
def flood_search(list):
    marks = [[]]*len(list)
    marks[0] = [0]
    still_moving = True
    while still_moving:
        still_moving = False
        for pos in range(0, len(list)):
            if marks[pos]:
                if pos + list[pos] < len(list) and not marks[pos + list[pos]]:
                    marks[pos + list[pos]] = marks[pos] + [list[pos]]
                    pos += list[pos]
                    still_moving = True
                if pos - list[pos] >= 0 and not marks[pos - list[pos]]:
                    marks[pos - list[pos]] = marks[pos] + [-list[pos]]
                    pos -= list[pos]
                    still_moving = True
    return marks[-1]
This works by taking every possible branch at the same time.
You can also use the method to get the actual route taken to get to the end. It can still be used as a condition, since it returns an empty list if no path is found (a falsy value), or a list containing the path if a path is found (a truthy value).
However, you can always just use list[-1] to get the last item.
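For illustration, this is how flood_search might be called on the list from the question (a sketch; the exact jumps recorded in the returned path depend on the order positions are visited):
mylist = [1, 2, 2, 3, 4, 5, 3, 2, 1, 7, 0]
path = flood_search(mylist)
if path:
    # Non-empty list: the last cell is reachable from index 0. The list
    # starts with the seed 0 and then records the signed jumps taken.
    print("reachable, one sequence of jumps:", path)
else:
    print("the end of the list cannot be reached")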

Ruby on Rails: optimization of some code

I have some simple code that uses the min-max algorithm to locate birds. Everything works, but I find my programming not good and I believe there is a better solution. I'm not that experienced in RoR, but if somebody knows a better way to achieve the same result, I'm grateful ;).
There are two parts I hate: the 4 lists I had to create to determine the max or min value for the different combinations (the core of the min-max algorithm), and the very ugly SQL hack.
Thanks!
def index
  # fetch all our birds
  @birds = Bird.all
  # Loop over the birds
  @birds.each do |bird|
    @fixed = Node.where("d7type = 'f'")
    xminmax = []
    xmaxmin = []
    yminmax = []
    ymaxmin = []
    @fixed.each do |fixed|
      rss = Log.find_by_sql("SELECT logs.fixed_mac, AVG(logs.blinker_rss) AS avg_rss FROM logs
        WHERE logs.blinker_mac = '#{bird.d7_mac}' AND logs.fixed_mac = '#{fixed.d7_mac}' ORDER BY logs.id DESC LIMIT 30")
      converted_rss = calculate_distance_rss(rss[0].attributes["avg_rss"])
      xminmax.push(fixed.xpos + converted_rss)
      xmaxmin.push(fixed.xpos - converted_rss)
      yminmax.push(fixed.ypos + converted_rss)
      ymaxmin.push(fixed.ypos - converted_rss)
    end
    pos = { x: (xminmax.min + xmaxmin.max) / 2, y: (yminmax.min + ymaxmin.max) / 2 }
    puts pos
  end
end
A few things you could do to start with: first (assuming Birds could be a large table), change Bird.all to
Bird.find_each do |bird|
  ... code ...
end
It's a more efficient way to loop over many table records.
2nd: take @fixed = Node.where("d7type = 'f'") out of the each loop, since its query doesn't depend on any loop variables. Put it above the loop so it doesn't execute on each iteration.
3rd (not so much an optimization as just safer code): your Log.find_by_sql looks simple enough to use ActiveRecord; you can change it to:
rss = Log.select('fixed_mac, AVG(logs.blinker_rss) AS avg_rss, blinker_mac').
          where(blinker_mac: bird.d7_mac, fixed_mac: fixed.d7_mac).
          order('id DESC').limit(30)
converted_rss = calculate_distance_rss(rss.first.avg_rss)
Everything else looks fine.