From the nested sets reference document written by Mike Hillyer and other blogs, I could understand how hierarchies are managed in an RDBMS. I was also able to successfully implement the model for one of my projects. I am currently working on a problem that also has a hierarchy, but the nodes are built from the bottom up. I am using MySQL.
Say I have 10 objects; I initially create rows for them in a table. There is also a table holding the left and right values required for the nested sets model. In this table, I group these 10 objects into two sets, say two bags, with 5 objects in one bag and the other 5 in another bag (based on some logic). These two bags are then grouped together to form a bigger bag. Likewise, such bags are grouped together to form a big container.
I hope this example gives you an idea of what I am trying to achieve. It is the opposite of the traditional nested sets model, where the sets are built from the top.
Can you suggest whether nested sets can be applied here? If so, will changing the update query during insertion be enough to build the entire hierarchy? If not, what other techniques can be used to tackle such problems?
The nested sets model works for any hierarchy, as long as it is non-overlapping (i.e. each child has at most one parent).
Your model seems to have a predefined hierarchy ("objects", "bags" and "containers" being different entities with different properties). If that is indeed the case, you don't need nested sets at all; a simple set of foreign key constraints will suffice.
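For that fixed case, a minimal sketch could look like this (the table and column names are mine):

CREATE TABLE containers (
    id BIGINT PRIMARY KEY
);

CREATE TABLE bags (
    id BIGINT PRIMARY KEY,
    container_id BIGINT,
    FOREIGN KEY (container_id) REFERENCES containers (id)
);

CREATE TABLE objects (
    id BIGINT PRIMARY KEY,
    bag_id BIGINT,
    FOREIGN KEY (bag_id) REFERENCES bags (id)
);

Leaving the foreign keys nullable lets you create the objects first and group them later, which matches your bottom-up workflow.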
If it's not (say, if a "bag" can be promoted to a "container", or there can be "containers" containing other "containers", etc.), you will indeed need some kind of hierarchy model, and nested sets can serve as one.
One way to implement it would be to add references to your "bags", "containers" and so on in the table which holds the left and right values for your "objects":
CREATE TABLE nested_sets
(
    ref BIGINT NOT NULL,
    type INT NOT NULL,  -- 1 = object, 2 = bag, 3 = container
    `left` BIGINT,      -- LEFT and RIGHT are reserved words in MySQL, hence the backticks
    `right` BIGINT
);
INSERT
INTO nested_sets
VALUES (1, 1, 1, 1),
(2, 1, 2, 2),
(3, 1, 3, 3), -- 3 objects in bag 1
(4, 1, 4, 4),
(5, 1, 5, 5),
(6, 1, 6, 6), -- 3 objects in bag 2
(1, 2, 1, 3), -- bag 1, containing objects 1 to 3
(2, 2, 4, 6), -- bag 2, containing objects 4 to 6
(1, 3, 1, 6); -- container 1, containing bags 1 and 2 and, by extension, objects 1 to 6
You may also want to move left and right fields from the nested_sets table to the main tables describing the entities, or, alternatively, you may want to move all entities into a single table. This depends on how rigid your definitions of "bag", "container" and "object" are.
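For example, with the sample rows above, something like this should list every object inside container 1 (the BETWEEN test is the usual nested-sets containment check):

SELECT o.ref
FROM nested_sets o
JOIN nested_sets c
  ON o.`left` BETWEEN c.`left` AND c.`right`
WHERE o.type = 1   -- objects
  AND c.type = 3   -- containers
  AND c.ref = 1;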
I am experimenting with reinforcement learning in Python using Keras. Most of the available tutorials use the OpenAI Gym library to create the environment, state, and action sets.
After practicing with many good examples written by others, I decided that I want to create my own reinforcement learning environment, state, and action sets.
This is what I think will be fun to teach the machine to do.
An array of integers from 1 to 4. I will call these targets.
targets = [[1, 2, 3, 4]]
Additional numbers list (at random) from 1 to 4. I will call these bullets.
bullets = [1, 2, 3, 4]
When I shoot a bullet at a target, the target's number becomes the sum of the original target number and the bullet number.
I want to shoot a bullet (one at a time) at one of the targets to make the target's number equal 5.
For example, given targets [1 2 3 4] and bullet 1, I want the machine to predict the correct index to shoot at.
In this case, it should be index 3, because 4 + 1 = 5
curr_state = [[1, 2, 3, 4]]
bullet = 1
action = 3 (<-- index of the curr_state)
next_state = [[1, 2, 3, 5]]
I have been racking my brain to think of the best way to frame this as a reinforcement learning design. I tried some approaches, but the model's results are not very good (meaning it most likely fails to make the number 5).
Mostly because the state is two-dimensional: (1) the targets; (2) the bullet at that time. The method I have employed so far is to convert the state as follows:
State = 5 - targets - bullet
I was wondering if anyone can think of a better way to design this model?
Thanks in advance!
Alright, it looks like no one is helping you out, so I just wrote a Python environment file for you as you described. I also made it as close to the OpenAI Gym style as possible. Here is the link to it in my GitHub repository; you can copy the code or fork it. I will explain it below:
https://github.com/RuiNian7319/Miscellaneous/blob/master/ShootingRange.py
States = [0, 1, 2, ..., 10]
Actions = [-2, -1, 0, 1, 2]
So the game starts at a random number between 0 and 10 (you can change this easily if you want), and that random number is the "target" you described above. Given this target, your AI agent can fire the gun, and it shoots bullets corresponding to the numbers above. The objective is for your bullet and the target to add up to 5. The negative bullets are there in case your AI agent overshoots 5, or the target is a number above 5.
To get a positive reward, the agent has to reach 5. So if the current value is 3 and the agent shoots 2, the agent will get a reward of 1 since it reached a total of 5, and that episode will end.
There are 3 ways for the game to end:
1) Agent gets 5
2) Agent fails to get 5 in 15 tries
3) The number is above 10. In this case, we say the target is too far
Sometimes, you need to shoot multiple times to get 5. So, if your agent shoots, its current bullet will be added to the state, and the agent tries again from that new state.
Example:
Current state = 2. Agent shoots 2. New state is 4. And the agent starts at 4 at the next time step. This "sequential decision making" creates a reinforcement learning environment, rather than a contextual bandit.
I hope this makes sense, let me know if you have any questions.
First of all, I apologize for the bad English you are about to read...
I'm trying to develop a little e-commerce web application (from scratch, without using platforms like Magento, OpenCart, Shopify...) for a pizza delivery business in the city where I live. The restaurant also sells some Italian food, like pasta, fish and meat.
I'm stuck on a relational database problem; I will explain what I did in the database. I will write the table structures followed by some example records.
Unlike the pastas, a pizza's price varies according to its size (an attribute).
The data will be displayed in the following way: a pizza record is shown in the front-end, and when the user selects a size, the price is displayed below it, along with two controls to add or subtract the quantity of that product (with that size).
This is the case of a pizza with one attribute that affects the price; there are also attributes that do not affect the price, e.g. the cooking/doneness. Another case is a product that has more than one attribute affecting the price.
In summary:
A product without attributes, which has only one price.
A product with one attribute that affects the price.
A product with two or more attributes that affect the price.
MySQL Tables:
Categories(ID, name):
1, Pizzas
2, Pastas
_
Products(ID, category_id, name, description)
1, 1, Margherita, Lorem ipsum
2, 1, 4 Stagioni, Lorem ipsum
3, 1, Capricciosa, Lorem ipsum
4, 2, Bologna, Lorem ipsum
5, 2, Pesto, Lorem ipsum
_
Attributes (ID, name)
1, Size
2, Cooking
_
Meta_attributes(ID, attribute_id, name)
1, 1, Small
2, 1, Medium
3, 1, Big
4, 2, Blue
5, 2, Medium well
6, 2, Well done
7, 2, Overcooked
_
meta_attributes_values(ID, product_id, meta_attrib_id, value)
1, 1, 1, 12
2, 1, 2, 16
3, 1, 3, 19
4, 2, 1, 14
5, 2, 2, 18
6, 2, 3, 20
_
In this schema, a product can have a value if and only if it has a meta-attribute, and in order to have a meta-attribute it must have an attribute. But pasta is "linear": it has no attributes, and each pasta product has only one price.
Questions:
How should the database be structured to handle all these cases?
What about special cases where one attribute influences another? For example, suppose that the price of a pizza varies according to its size (which is true), but that it also has an attribute called "extra", and that the price of the extra varies according to the size of the pizza, because a larger pizza requires more of it. I know the example is not very clear, but I hope I have expressed the case well.
Thanks for reading!
It's important to note that the schema you've described doesn't represent an actual order, it represents the abstract concept of a pizza. A graph of Products, Attributes, Meta-Attributes, and Values is far more complicated than an itemized order needs to be.
What's really in an order? There are Products, each of which has a base price; and there are, as you note, things which affect the price of a Product. These modifiers come in at least two types:
Additive modifiers tack on a flat sum to the base price. A "medium" pizza costs $4 above the base; a "large" costs $7 more.
Dependent modifiers change the base price after it's adjusted by additive modifiers. The simplest form of dependent modifier is a multiplier: whatever the adjusted price of a pizza is, one with "extra hot peppers" costs 0.10x more than that.
With any luck, that's all you have to deal with. If "extra peppers" instead costs $0.50 on a small, $0.60 on a medium, and $1.00 on a large, you have to track all three and correlate with the size modifier since the addition isn't a consistent function of adjusted price. Treating additive modifiers like size independently -- for example, by having base, medium, and large prices in Product -- may be more effective in that case.
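If the simple additive-plus-multiplier case holds, a minimal sketch could be a single modifiers table per product; the table and column names below are mine, not part of your schema:

CREATE TABLE price_modifiers (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    product_id  INT NOT NULL,
    name        VARCHAR(50) NOT NULL,        -- e.g. 'Medium', 'Extra hot peppers'
    adds        DECIMAL(8,2) DEFAULT 0.00,   -- additive: flat amount on top of the base price
    multiplier  DECIMAL(5,3) DEFAULT 0.000,  -- dependent: fraction of the adjusted price
    FOREIGN KEY (product_id) REFERENCES Products (ID)
);

-- 'Medium' adds a flat $4 to product 1; 'Extra hot peppers' adds 10% of the adjusted price
INSERT INTO price_modifiers (product_id, name, adds, multiplier)
VALUES (1, 'Medium', 4.00, 0.000),
       (1, 'Extra hot peppers', 0.00, 0.100);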
It would be possible to achieve a simpler representation still by treating products and attributes identically and storing them in a single table with a foreign key to itself to represent the parent-child relationship. Effectively, you'd have no Products, only Attributes. And "Margherita" would be an Attribute that adds $12 to an item base price of $0.
But getting back to the concrete, if you need to track Orders with Order_Items too, even a one-row-per-attribute solution is unwieldy since you have a profusion of foreign keys in each line item of the order. In this case, it may be best to store your sub-items (or everything, if you roll it all into one table) in a JSON field, such that your Order_Items table looks like this:
id order_id subtotal attributes
1 1 17.60 [{"name": "Margherita", "adds": 12.00}, {"name": "Medium", "adds": 4}, {"name": "Extra hot peppers", "multiplier": 0.10}]
2 1 12.00 [{"name": "Pesto", "adds": 12.00}]
This is a) denormalized and b) breaks referential integrity. Both of these, in this instance, are good things! If you ever adjust prices or even take something off the menu, you don't want to screw up your bookkeeping or trip a foreign key constraint error.
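A rough sketch of that table, assuming MySQL 5.7+ for the native JSON type (the column definitions are illustrative):

CREATE TABLE Order_Items (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    order_id   INT NOT NULL,
    subtotal   DECIMAL(8,2) NOT NULL,  -- the price actually charged, frozen at order time
    attributes JSON NOT NULL           -- snapshot of the chosen product and its modifiers
);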
I'm trying to determine if it is possible to easily model a directed cyclic graph with a closure table (and/or possibly other helper tables) in SQL.
For example, suppose I have this directed graph (all edges pointing down): 1 → 2, 1 → 3, 2 → 4, 3 → 4.
I'm having trouble modeling this with a closure table.
We would get this table:
(ancestor, descendant, path-length)
(1, 1, 0)
(2, 2, 0)
(3, 3, 0)
(4, 4, 0)
(1, 2, 1)
(1, 3, 1)
(2, 4, 1)
(3, 4, 1)
(1, 4, 2)
A closure table breaks down when removing the edge between 1 and 2.
DELETE FROM closure WHERE descendant IN
(SELECT descendant FROM closure WHERE ancestor=2);
DELETE FROM closure WHERE descendant=2 AND ancestor=1;
The first delete query removes the paths between 1 and 4 and between 3 and 4, which shouldn't be deleted.
I can't find a solution to this with a closure table, and it gets further complicated if 4 were to point to 1 (making the graph cyclic).
I haven't been able to find much written on this subject. I'd appreciate any input regarding how to implement this type of graph in SQL, or if SQL is simply not a good choice for this type of graph.
I've done cyclic graphs in a closure table before. It's much more expensive to delete edges but it can be done.
First of all you can forget about path-length. What's the path-length of a cycle? Infinity? Drop that column.
When you remove an edge (parent, child) from the graph, you have to consider the possibility that there are alternate paths from the parent's ancestors to the child's children. So before deleting the edge, save all of the parent's ancestors' children - these are the potential alternate paths. Then, after you've deleted the edge, re-add the parent's ancestors' children to the closure table, excluding duplicate rows.
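If that incremental repair gets too hairy, a brute-force alternative (not the incremental approach described above) is to keep a separate table of direct edges and rebuild the closure from it after every edge removal. A rough MySQL 8+ sketch, assuming an edges(parent, child) table of my own invention:

-- remove the direct edge first
DELETE FROM edges WHERE parent = 1 AND child = 2;

-- then recompute all reachable (ancestor, descendant) pairs from the remaining direct edges;
-- UNION DISTINCT also stops the recursion once a cycle stops producing new rows
DELETE FROM closure;
INSERT INTO closure (ancestor, descendant)
SELECT ancestor, descendant
FROM (
    WITH RECURSIVE reach (ancestor, descendant) AS (
        SELECT parent, child FROM edges
        UNION DISTINCT
        SELECT r.ancestor, e.child
        FROM reach r
        JOIN edges e ON e.parent = r.descendant
    )
    SELECT ancestor, descendant FROM reach
) AS reachable;
-- re-add the (x, x) self-rows here as well if you keep them

Rebuilding the whole closure is the expensive part, which is why the incremental repair above is worth the trouble on larger graphs.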
Here is a function I would like to write but am unable to. Even if you don't / can't give a solution, I would be grateful for tips. For example, I know that there is a correlation between the ordered representations of the sum of an integer and ordered set partitions, but that alone does not help me in finding the solution. So here is the description of the function I need:
The Task
Create an efficient* function
List<int[]> createOrderedPartitions(int n_1, int n_2,..., int n_k)
that returns a list of arrays of all set partitions of the set
{0,...,n_1+n_2+...+n_k-1} into as many blocks as there are arguments, of sizes
(in this order) n_1, n_2, ..., n_k (e.g. n_1=2, n_2=1, n_3=1 -> ({0,1},{3},{2}), ...).
Here is a usage example:
int[] partition = createOrderedPartitions(2,1,1).get(0);
partition[0]; // -> 0
partition[1]; // -> 1
partition[2]; // -> 3
partition[3]; // -> 2
Note that the number of elements in the list is
(n_1+n_2+...+n_k choose n_1) * (n_2+n_3+...+n_k choose n_2) * ... *
(n_k choose n_k). Also, createOrderedPartitions(1,1,1) would create the
permutations of {0,1,2} and thus there would be 3! = 6 elements in the
list.
* by efficient I mean that you should not initially create a bigger list
like all partitions and then filter out results. You should do it directly.
Extra Requirements
If an argument is 0 treat it as if it was not there, e.g.
createOrderedPartitions(2,0,1,1) should yield the same result as
createOrderedPartitions(2,1,1). But at least one argument must not be 0.
Of course all arguments must be >= 0.
Remarks
The provided pseudo code is quasi Java but the language of the solution
doesn't matter. In fact, as long as the solution is fairly general and can
be reproduced in other languages it is ideal.
Actually, even better would be a return type of List<Tuple<Set>> (e.g. when
creating such a function in Python). However, then the arguments which have
a value of 0 must not be ignored. createOrderedPartitions(2,0,2) would then
create
[({0,1},{},{2,3}),({0,2},{},{1,3}),({0,3},{},{1,2}),({1,2},{},{0,3}),...]
Background
I need this function to make my mastermind-variation bot more efficient and
most of all the code more "beautiful". Take a look at the filterCandidates
function in my source code. There are unnecessary
/ duplicate queries because I'm simply using permutations instead of
specifically ordered partitions. Also, I'm just interested in how to write
this function.
My ideas for (ugly) "solutions"
Create the powerset of {0,...,n_1+...+n_k-1}, filter out the subsets of size n_1, n_2 etc. and create the cartesian product of the k subsets. However, this won't actually work because there would be duplicates, e.g. ({1,2},{1})...
First choose n_1 of x = {0,...,n_1+n_2+...+n_k-1} and put them in the first set. Then choose n_2 of x without the n_1 elements chosen beforehand, and so on. You then get, for example, ({0,2},{},{1,3},{4}). Of course, every possible combination must be created, so ({0,4},{},{1,3},{2}) too, and so on. Seems rather hard to implement but might be possible.
Research
I guess this goes in the direction I want; however, I don't see how I can utilize it for my specific scenario.
http://rosettacode.org/wiki/Combinations
You know, it often helps to phrase your thoughts in order to come up with a solution. It seems that the subconscious then just starts working on the task and notifies you when it finds the solution. So here is the solution to my problem in Python:
from itertools import combinations

def partitions(*args):
    # pick args[0] elements of s for the current block, then recursively
    # partition the remaining elements according to the rest of the arguments
    def helper(s, *args):
        if not args:
            return [[]]
        res = []
        for c in combinations(s, args[0]):
            s0 = [x for x in s if x not in c]  # elements not used in this block
            for r in helper(s0, *args[1:]):
                res.append([c] + r)
        return res
    s = range(sum(args))
    return helper(s, *args)

print(partitions(2, 0, 2))
The output is:
[[(0, 1), (), (2, 3)], [(0, 2), (), (1, 3)], [(0, 3), (), (1, 2)], [(1, 2), (), (0, 3)], [(1, 3), (), (0, 2)], [(2, 3), (), (0, 1)]]
It is adequate for translating the algorithm to Lua/Java. It is basically the second idea I had.
The Algorithm
As I already mentioned in the question, the basic idea is as follows: First choose n_1 elements of the set s := {0,...,n_1+n_2+...+n_k-1} and put them in the first set of the first tuple in the resulting list (e.g. [({0,1,2},... if the chosen elements are 0, 1, 2). Then choose n_2 elements of the set s_0 := s without the n_1 previously chosen elements, and so on. One such tuple might be ({0,2},{},{1,3},{4}). Of course, every possible combination is created, so ({0,4},{},{1,3},{2}) is another such tuple, and so on.
The Realization
At first the set to work with is created (s = range(sum(args))). Then this set and the arguments are passed to the recursive helper function helper.
helper does one of the following things: If all the arguments are processed return "some kind of empty value" to stop the recursion. Otherwise iterate through all the combinations of the passed set s of the length args[0] (the first argument after s in helper). In each iteration create the set s0 := s without the elements in c (the elements in c are the chosen elements from s), which is then used for the recursive call of helper.
So what happens with the arguments in helper is that they are processed one by one. helper may first start with helper([0,1,2,3], 2, 1, 1) and in the next invocation it is for example helper([2,3], 1, 1) and then helper([3], 1) and lastly helper([]). Of course another "tree-path" would be helper([0,1,2,3], 2, 1, 1), helper([1,2], 1, 1), helper([2], 1), helper([]). All these "tree-paths" are created and thus the required solution is generated.
Let's say I have a sequence of values (e.g., 3, 5, 8, 12, 15) and I want to occasionally decrease all of them by a certain value.
If I store them as the sequence (0, 2, 3, 4, 3) and keep a variable as a base of 3, I now only have to change the base (and check the first items) whenever I want to decrease them instead of actually going over all the values.
I know there's an official term for this, but when I literally translate from my native language to English it doesn't come out right.
Differential Coding / Delta Encoding?
I don't know a name for the data structure, but it's basically just base+offset :-)
An offset?
If I understand your question right, you're rebasing. That's normally used in reference to patching up addresses in DLLs from a load address.
I'm not sure that's what you're doing, because your example seems to be incorrect. In order to come out with { 3, 5, 8, 12, 15 }, with a base of 3, you'd need { 0, 2, 5, 9, 12 }.
I'm not sure. If you imagine your first array as providing the results of some function of an index value f(i), where f(0) is 3, f(1) is 5, and so forth, then your second array is describing the function f'(i), where f(i+1) = f(i) + f'(i+1), given f(0) = 3.
I'd call it something like a derivative function, where the process of retrieving your original data is simply the summation function.
What will happen more often, will you be changing f(0) or retrieving values from f(i)? Is this technique rooted in a desire to optimize?
Perhaps you're looking for a term like "Inductive Sequence" or "Induction Sequence." (I just made that up.)