MicroStrategy - Create a simple graph from CSV

I have a very simple table (CSV) that I've imported into MicroStrategy to visualize my data.
The data represents the points of certain teams (e.g. football) for each round played (px).
csv:
#teams, p1, p2, p3, p4
Team1, 3, 6, 6, 7
Team2, 0, 0, 3, 4
Team3, 3, 6, 9, 12
Team4, 1, 4, 7, 8
The expected graph (I made this one in Excel) is attached.
This seems very simple to do, but so far I could not figure out how to organize the data (attributes/metrics, etc.) to create this visualization.
Any ideas? Is there some metric I have to create (like a maximum number of points)?
EDIT:
The best I could do was a 'discrete' representation of the data with dots (see attachment); I'd like a continuous line instead.
Solution with dots instead of a line connecting the dots
Thanks

You need to import your data using the Crosstab option.
You should create two attributes:
Team (team1, team2, team3, team4)
Player Rounds (p1, p2, p3, p4)
And a Points metric.
With these three elements you should be able to create your graph easily.

Related

Formatting my csv file for creating boxplots with python

I have a probably quite simple problem, as I'm a beginner.
I want to create boxplots in Python using seaborn, but I'm having difficulties setting up my CSV file and my code to get the result I want.
I have samples that have been treated with 7 different treatments, in triplicate (21 columns in total).
For each sample I have measured the content of 43 different compounds (43 rows).
My goal is to create one graph per compound (43 graphs), each showing seven boxplots (7 treatments) made up of the triplicate measurements.
This is what my data looks like for better understanding:
Compounds, Treatment A (Replicate 1), Treatment A (Replicate 2), Treatment A (Replicate 3), Treatment B (Replicate 1), ..., Treatment G (Replicate 3)
Compound 1
Compound 2
...
Compound 43
I would be very thankful for help! :-)
Yours,
Natalie
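No answer is recorded for this question, but the usual first step is reshaping: seaborn's boxplot expects "long" data, one row per measurement with columns like (Compound, Treatment, Value), while the CSV described is "wide". A sketch of that reshaping with the standard library only; the headers and numbers below are invented from the description, and the seaborn calls appear only in comments:

```python
import csv, io, re

# Invented sample mimicking the wide layout described in the question
# (real data would have 21 treatment columns and 43 compound rows).
wide_csv = """Compounds,Treatment A (Replicate 1),Treatment A (Replicate 2),Treatment A (Replicate 3)
Compound 1,1.0,1.2,0.9
Compound 2,5.1,4.8,5.3
"""

long_rows = []  # (compound, treatment, value)
reader = csv.reader(io.StringIO(wide_csv))
header = next(reader)
# Strip the "(Replicate n)" suffix so all replicates share one treatment label
treatments = [re.sub(r"\s*\(Replicate \d+\)", "", h) for h in header[1:]]
for row in reader:
    compound = row[0]
    for treatment, value in zip(treatments, row[1:]):
        long_rows.append((compound, treatment, float(value)))

# With pandas/seaborn installed, one figure per compound would then be e.g.:
#   df = pd.DataFrame(long_rows, columns=["Compound", "Treatment", "Value"])
#   for compound, sub in df.groupby("Compound"):
#       sns.boxplot(data=sub, x="Treatment", y="Value")
print(long_rows[0])  # ('Compound 1', 'Treatment A', 1.0)
```

Once the data is long, each boxplot groups the three replicate values per treatment automatically.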

reinforcement learning model design - how to add up to 5

I am experimenting with reinforcement learning in Python using Keras. Most of the available tutorials use the OpenAI Gym library to create the environment, state, and action sets.
After practicing with many good examples written by others, I decided that I want to create my own reinforcement learning environment, state, and action sets.
This is what I think will be fun to teach the machine to do.
An array of integers from 1 to 4. I will call these targets.
targets = [[1, 2, 3, 4]]
An additional list of numbers (drawn at random) from 1 to 4. I will call these bullets.
bullets = [1, 2, 3, 4]
When I shoot a bullet to a target, the target's number will be the sum of original target num + bullet num.
I want to shoot a bullet (one at a time) at one of the targets to make that target's number add up to 5.
For example, given targets [1 2 3 4] and bullet 1, I want the machine to predict the correct index to shoot at.
In this case, it should be index 3, because 4 + 1 = 5
curr_state = [[1, 2, 3, 4]]
bullet = 1
action = 3 (<-- index of the curr_state)
next_state = [[1, 2, 3, 5]]
I have been racking my brain to think of the best way to frame this as a reinforcement learning design. I tried a few approaches, but the model's results are not very good (meaning it most likely fails to make the number 5).
This is mostly because the state is 2-D: (1) the targets; (2) the bullet at that time. The method I have employed so far is to convert the state as follows:
State = 5 - targets - bullet
I was wondering if anyone can think of a better way to design this model?
Thanks in advance!
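The state encoding the question describes can be sketched in plain Python (the goal value 5 is taken from the question; the function names are illustrative):

```python
# Fold the 2-D state (targets + current bullet) into one flat vector:
# "5 - target - bullet" is each slot's remaining distance to the goal.
def encode_state(targets, bullet, goal=5):
    """Per-target distance to the goal if this bullet were fired at it."""
    return [goal - t - bullet for t in targets]

def best_action(targets, bullet, goal=5):
    """Index whose encoded distance is 0, i.e. target + bullet == goal."""
    encoded = encode_state(targets, bullet, goal)
    return encoded.index(0) if 0 in encoded else None

# Example from the question: targets [1, 2, 3, 4], bullet 1 -> index 3
print(best_action([1, 2, 3, 4], 1))  # 3, because 4 + 1 = 5
```

With this encoding the network only ever sees one vector per step, which sidesteps the 2-D state problem the question mentions.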
Alright, it looks like no one else is helping you out, so I wrote a Python environment file for you as you described, and made it as OpenAI-Gym-style as possible. Here is the link to it in my GitHub repository; you can copy the code or fork it. I will explain it below:
https://github.com/RuiNian7319/Miscellaneous/blob/master/ShootingRange.py
States = [0, 1, 2, ..., 10]
Actions = [-2, -1, 0, 1, 2]
So the game starts at a random number between 0 - 10 (you can change this easily if you want), and the random number is your "target" you described above. Given this target, your AI agent can fire the gun, and it shoots bullets corresponding to the numbers above. The objective is for your bullet and the target to add up to 5. There are negatives in case your AI agent overshoots 5, or if the target is a number above 5.
To get a positive reward, the agent has to reach 5. So if the current value is 3 and the agent shoots 2, the agent gets a reward of 1, since it reached a total value of 5, and that episode ends.
There are 3 ways for the game to end:
1) Agent gets 5
2) Agent fails to get 5 in 15 tries
3) The number is above 10. In this case, we say the target is too far
Sometimes, you need to shoot multiple times to get 5. So, if your agent shoots, its current bullet will be added to the state, and the agent tries again from that new state.
Example:
Current state = 2. Agent shoots 2. New state is 4. And the agent starts at 4 at the next time step. This "sequential decision making" creates a reinforcement learning environment, rather than a contextual bandit.
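The rules above can be compressed into a minimal environment class (the linked GitHub file is the full version; the class and method names here are illustrative, following the Gym reset/step convention):

```python
import random

class ShootingRange:
    """Sketch of the environment described above."""
    ACTIONS = [-2, -1, 0, 1, 2]          # bullets the agent may fire
    GOAL, MAX_TRIES, MAX_STATE = 5, 15, 10

    def reset(self):
        # The game starts at a random target number between 0 and 10.
        self.state = random.randint(0, self.MAX_STATE)
        self.tries = 0
        return self.state

    def step(self, action):
        """Fire a bullet; the bullet is added to the current state."""
        assert action in self.ACTIONS
        self.state += action
        self.tries += 1
        if self.state == self.GOAL:        # 1) agent reached 5: reward 1
            return self.state, 1, True
        if self.tries >= self.MAX_TRIES:   # 2) failed in 15 tries
            return self.state, 0, True
        if self.state > self.MAX_STATE:    # 3) target too far
            return self.state, 0, True
        return self.state, 0, False        # keep shooting from the new state

env = ShootingRange()
state = env.reset()
```

Because each shot changes the state the next shot starts from, the agent faces a sequential decision problem rather than a one-shot (contextual bandit) one, exactly as described above.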
I hope this makes sense, let me know if you have any questions.

Convolutional neural network concept

Please go to the link http://scs.ryerson.ca/~aharley/vis/conv/flat.html
and draw a number in the box provided to see through various layers.
Now if you scroll through the different squares of the layers, you can see how your square is related to other squares of previous layers.
Now my doubt is this: according to CS231n lecture 7 (http://cs231n.github.io/convolutional-networks/), a filter has the same depth as its input layer, and the number of filters equals the depth of the succeeding layer. But if you go through convolution layer 2, you can see that a particular square of a particular layer is obtained from only some of the squares of the preceding layer. I am trying to understand the concept here. Please help.
The following dimensions are given as (C, H, W) for feature maps and (out_channels, in_channels, H, W) for kernels.
pool1('6', 14, 14)
|
| kernel(16, '6', 5, 5)
v
conv2(16, 10, 10)
|
| kernel(2, 2), stride(2)
v
pool2(16, 5, 5)
Pool1 outputs 6 feature maps, which are the input of Conv2. Accordingly, Conv2 has 16 kernels (which generate 16 feature maps), and each of them has the same depth (number of channels) as Pool1's output, which is 6 (marked with single quotes above).
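The shape arithmetic in the diagram can be checked directly with the standard convolution output formula (stride 1 and no padding assumed, matching the 14 -> 10 step above):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

in_channels, h = 6, 14            # pool1 output: 6 channels of 14x14
num_kernels, k = 16, 5            # conv2: 16 kernels of shape (6, 5, 5)

conv2_h = conv_out(h, k)                   # 14 - 5 + 1 = 10
pool2_h = conv_out(conv2_h, 2, stride=2)   # 2x2 window, stride 2 -> 5

# Each kernel's depth equals the input channel count, so it mixes
# all 6 incoming feature maps into one output map.
weights_per_kernel = in_channels * k * k   # 6 * 5 * 5 = 150

print(conv2_h, pool2_h, weights_per_kernel)  # 10 5 150
```

So a single square in Conv2 is computed from a 5x5 window across all 6 of Pool1's maps, which is why only some squares of the preceding layer (the ones inside that window) contribute to it.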

Nested Sets - Bottom Up Approach

From the nested sets reference document written by Mike Hillyer and other blogs, I could understand how hierarchies are managed in an RDBMS. I was also able to successfully implement the model for one of my projects. I am currently working on a problem which also has a hierarchy, but the nodes are built from the bottom. I am using MySQL.
Consider I have 10 objects; I initially create rows for them in a table. Then there is a table which holds the left and right values required for implementing the nested sets model. In this table, I group the 10 objects into two sets, say two bags: 5 objects in one bag and the other 5 in another (based on some logic). These two bags are then grouped together to form a bigger bag. Likewise, such bags are grouped together to form a big container.
I hope the example is clear to you to get an idea of what I am trying to achieve here. This is the opposite of applying the traditional nested set model where I build the sets from the top.
Can you suggest whether nested sets can be applied here? If yes, will changing the update query during insertion be sufficient to form the entire hierarchy? If not, what other techniques can be used to tackle such problems?
The nested sets model works for any hierarchy, as long as it is non-overlapping (i.e. each child has at most one parent).
Your model seems to have a predefined hierarchy ("objects", "bags" and "containers" being different entities with different properties). If that is indeed the case, you don't need nested sets at all; a simple set of foreign key constraints will suffice.
If it's not (say, if a "bag" can be promoted to a "container", or "containers" can contain other "containers", etc.), you will indeed need some kind of hierarchy model, and nested sets can serve as one.
One way to implement it would be to add references to your "bags" or "containers" to the table which holds the left and right values for your "objects":
CREATE TABLE nested_sets
(
ref BIGINT NOT NULL,
type INT NOT NULL, -- 1 = object, 2 = bag, 3 = container
`left` BIGINT, -- backticks: LEFT and RIGHT are reserved words in MySQL
`right` BIGINT
);
INSERT
INTO nested_sets
VALUES (1, 1, 1, 1),
(2, 1, 2, 2),
(3, 1, 3, 3), -- objects 1 to 3 go into bag 1
(4, 1, 4, 4),
(5, 1, 5, 5),
(6, 1, 6, 6), -- objects 4 to 6 go into bag 2
(1, 2, 1, 3), -- bag 1, containing objects 1 to 3
(2, 2, 4, 6), -- bag 2, containing objects 4 to 6
(1, 3, 1, 6); -- container 1, containing bags 1 and 2 and, by extension, objects 1 to 6
You may also want to move left and right fields from the nested_sets table to the main tables describing the entities, or, alternatively, you may want to move all entities into a single table. This depends on how rigid your definitions of "bag", "container" and "object" are.
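For the bottom-up construction the question asks about, the left/right values need not be produced by top-down insertion at all: since a parent's interval is just the min/max of its children's intervals, you can number the leaves first and derive each enclosing bag or container afterwards. A hypothetical Python sketch (names invented) that reproduces the sample rows above:

```python
def number_bottom_up(leaves, groups):
    """
    leaves: leaf ids in their final left-to-right order.
    groups: dict mapping group name -> list of members
            (leaf ids or other group names).
    Returns a dict mapping each id/name -> (left, right).
    """
    # Every leaf occupies a unit interval: left == right == its position.
    bounds = {leaf: (i + 1, i + 1) for i, leaf in enumerate(leaves)}

    def resolve(name):
        if name in bounds:
            return bounds[name]
        # A group's interval spans the min/max of its members' intervals.
        child_bounds = [resolve(m) for m in groups[name]]
        bounds[name] = (min(l for l, _ in child_bounds),
                        max(r for _, r in child_bounds))
        return bounds[name]

    for g in groups:
        resolve(g)
    return bounds

b = number_bottom_up(
    leaves=[1, 2, 3, 4, 5, 6],
    groups={"bag1": [1, 2, 3], "bag2": [4, 5, 6],
            "container1": ["bag1", "bag2"]})
print(b["bag1"], b["bag2"], b["container1"])  # (1, 3) (4, 6) (1, 6)
```

The computed bounds could then be bulk-inserted into the nested_sets table, so no per-insertion update queries are needed as long as the whole hierarchy is built in one pass.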

What is the name of this data structure or technique of using relative difference between sequence members

Let's say I have a sequence of values (e.g., 3, 5, 8, 12, 15) and I want to occasionally decrease all of them by a certain value.
If I store them as the sequence (0, 2, 3, 4, 3) and keep a base variable of 3, I now only have to change the base (and check the first items) whenever I want to decrease them all, instead of actually going over all the values.
I know there's an official term for this, but when I literally translate from my native language to English it doesn't come out right.
Differential Coding / Delta Encoding?
I don't know a name for the data structure, but it's basically just base+offset :-)
An offset?
If I understand your question right, you're rebasing. That's normally used in reference to patching up addresses in DLLs from a load address.
I'm not sure that's what you're doing, because your example seems to be incorrect. In order to come out with { 3, 5, 8, 12, 15 }, with a base of 3, you'd need { 0, 2, 5, 9, 12 }.
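For what it's worth, the question's numbers do check out under the first-difference (delta encoding) reading, where each stored value is the difference from its predecessor and the first is the offset from the base. A sketch:

```python
from itertools import accumulate

def delta_encode(values):
    """Store a base plus first-differences instead of raw values."""
    base = values[0]
    deltas = [0] + [b - a for a, b in zip(values, values[1:])]
    return base, deltas

def delta_decode(base, deltas):
    """Recover the raw values: base plus a running sum of the deltas."""
    return [base + s for s in accumulate(deltas)]

base, deltas = delta_encode([3, 5, 8, 12, 15])
print(base, deltas)  # 3 [0, 2, 3, 4, 3] -- the question's stored sequence

# Decreasing every element by 1 only touches the base:
print(delta_decode(base - 1, deltas))  # [2, 4, 7, 11, 14]
```

So "base + offsets" and "base + deltas" are two different encodings; the question's (0, 2, 3, 4, 3) matches the delta one, which is why only the base needs changing for a uniform shift.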
I'm not sure. If you imagine your first array as giving the results of some function of an index, f(i), where f(0) is 3, f(1) is 5, and so forth, then your second array describes a function f'(i) with f(i+1) = f(i) + f'(i+1), given f(0) = 3.
I'd call it something like a derivative (difference) function, where the process of retrieving your original data is simply summation.
What will happen more often, will you be changing f(0) or retrieving values from f(i)? Is this technique rooted in a desire to optimize?
Perhaps you're looking for a term like "Inductive Sequence" or "Induction Sequence." (I just made that up.)