TensorFlow 1.1 error: Attempt to reuse RNNCell with a different variable scope than its first use - deep-learning

I am trying to run the seq2seq translate example from the TensorFlow website and am getting the following error. I am using tensorflow-gpu==1.1.0.
ValueError: Attempt to reuse RNNCell
with a different variable scope than its first use.
First use of cell was with scope
'embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/gru_cell',
this attempt is with scope 'embedding_attention_seq2seq/rnn/gru_cell'.
Please create a new instance of the cell if you would like it to use a
different set of weights. If before you were using:
MultiRNNCell([GRUCell(...)] * num_layers), change to:
MultiRNNCell([GRUCell(...) for _ in range(num_layers)]). If before
you were using the same cell instance as both the forward and reverse
cell of a bidirectional RNN, simply create two instances (one for
forward, one for reverse). In May 2017, we will start transitioning
this cell's behavior to use existing stored weights, if any, when it
is called with scope=None (which can lead to silent model degradation,
so this error will remain until then.)
On GitHub, people suggested adding the reuse argument to the cell as follows:
def single_cell():
    return tf.contrib.rnn.GRUCell(size, reuse=tf.get_variable_scope().reuse)

if use_lstm:
    def single_cell():
        return tf.contrib.rnn.BasicLSTMCell(size, reuse=tf.get_variable_scope().reuse)

cell = single_cell()
if num_layers > 1:
    cell = tf.contrib.rnn.MultiRNNCell([single_cell() for _ in range(num_layers)])
But I am still getting the same error. What is the issue and how can I resolve it?
Any help is highly appreciated.
P.S.: There was a similar post on Stack Overflow, but that solution didn't work for me, and since the TF version is different, I created a new post.
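For clarity, the pattern the error message itself recommends boils down to giving every logical use of a cell its own instance. A minimal standalone sketch of that idea (independent of the tutorial code; size and num_layers are placeholder values):
import tensorflow as tf

size, num_layers = 256, 2  # placeholder hyperparameters for this sketch

def make_cell():
    # build a brand-new GRUCell on every call, so no two variable scopes share one instance
    return tf.contrib.rnn.GRUCell(size)

# give each logical use of a cell (e.g. encoder vs. decoder) its own stack
encoder_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
decoder_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])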

Related

BentoML - Serving a CatBoostClassifier with cat_features

I am trying to create a BentoML service for a CatBoostClassifier model that was trained using a column as a categorical feature. If I save the model and try to make some predictions with the saved model (not as a BentoML service), everything works as expected, but when I create the service using BentoML I get an error:
_catboost.CatBoostError: Bad value for num_feature[non_default_doc_idx=0,feature_idx=2]="Tertiary": Cannot convert 'b'Tertiary'' to float
The value comes from a column named 'road_type', and the model was trained using 'object' as the data type for that column.
If I try to pass a float or an integer for the 'road_type' column, I get the following error:
_catboost.CatBoostError: catboost/libs/data/model_dataset_compatibility.cpp:53: Feature road_type is Categorical in model but marked different in the dataset
If someone has encountered the same issue and found a solution I would appreciate it. Thanks!
I have tried different approaches for saving and loading the model, but unfortunately none of them worked.
You can try to explicitly pass the cat_features to the bentoml runner.
It would be something like this:
import bentoml
from catboost import Pool

runner = bentoml.catboost.get("bentoml_catboost_model:latest").to_runner()
cat_features = [2]  # specify your cat_features indices
prediction = runner.predict.run(Pool(input_data, cat_features=cat_features))
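If it helps, here is a rough sketch of how that could sit inside a BentoML service definition; the service name, the model tag and the column index 2 are assumptions you would adapt to your own model:
import bentoml
from bentoml.io import PandasDataFrame, NumpyNdarray
from catboost import Pool

runner = bentoml.catboost.get("bentoml_catboost_model:latest").to_runner()
svc = bentoml.Service("catboost_classifier_service", runners=[runner])

@svc.api(input=PandasDataFrame(), output=NumpyNdarray())
def predict(input_df):
    # wrap the incoming frame in a Pool so CatBoost treats the categorical column correctly
    return runner.predict.run(Pool(input_df, cat_features=[2]))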

textX: How to generate object names with ObjectProcessors?

I have a simple example model where I would like to generate names for those objects of the Position rule that were not given a name via as <NAME>. This is needed so that I can find them later with the built-in FQN scope provider.
My idea would be to do this in the position_name_generator object processor, but that will only be called after the whole model is parsed. I don't really understand the reason for that, since by the time I would need a Position object in the Project, the objects are already created; still, the object processor will not be called.
Another idea would be to do this in a custom scope provider for Position.location which would then first do the name generation and then use the built-in FQN to find the Location object. Although this would work, I consider this hacky and I would prefer to avoid it.
What would be the textX way of solving this issue?
(Please take into account that this is only a small example. In reality a similar functionality is required for a rather big and complex model. To change this behaviour with the generated names is not possible since it is a requirement.)
import textx
import textx.scoping.providers

MyLanguage = """
Model
    : (locations+=Location)*
      (employees+=Employee)*
      (positions+=Position)*
      (projects+=Project)*
    ;
Project
    : 'project' name=ID
      ('{'
          ('use' use=[Position])*
      '}')?
    ;
Position
    : 'define' 'position' employee=[Employee|FQN] '->' location=[Location|FQN] ('as' name=ID)?
    ;
Employee
    : 'employee' name=ID
    ;
Location
    : 'location' name=ID
      ( '{'
          (sub_location+=Location)+
      '}')?
    ;
FQN
    : ID('.' ID)*
    ;
Comment:
    /\/\/.*$/
    ;
"""
MyCode = """
location Building
{
    location Entrance
    location Exit
}
employee Hans
employee Juergen

// Shall be referred to with the given name: "EntranceGuy"
define position Hans->Building.Entrance as EntranceGuy

// Shall be referred to with the autogenerated name: <Employee>"At"<LastLocation>
define position Juergen->Building.Exit

project SecurityProject
{
    use EntranceGuy
    use JuergenAtExit
}
"""
def position_name_generator(obj):
    if "" == obj.name:
        obj.name = obj.employee.name + "At" + obj.location.name

def main():
    meta_model = textx.metamodel_from_str(MyLanguage)
    meta_model.register_scope_providers({
        "Position.location": textx.scoping.providers.FQN(),
    })
    meta_model.register_obj_processors({
        "Position": position_name_generator,
    })
    model = meta_model.model_from_str(MyCode)
    assert model, "Could not create model..."

if "__main__" == __name__:
    main()
What is the textx way to solve this...
The use case you describe is to define the name of an object based on other model elements, including a reference to other model elements. This is currently not covered by any test or use case in our test suite or in the textX documentation.
Object processors are executed at defined stages during model construction (see http://textx.github.io/textX/stable/scoping/#using-the-scope-provider-to-modify-a-model). In the described setup they are executed after reference resolution. Since the name to be defined/deduced is itself required for reference resolution, object processors cannot be used here (even if we allowed controlling whether object processors run before or after scope resolution, the described setup still would not work).
Given the dynamics of model loading (see http://textx.github.io/textX/stable/scoping/#using-the-scope-provider-to-modify-a-model), the solution is located within a scope provider (as you suggested). Here, we allow the order of reference resolution to be controlled, so that references to the object being named by a custom procedure are postponed until the references required to deduce/define the name are resolved.
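For illustration only, the custom-scope-provider direction you sketched could look roughly like this; this is my own hedged sketch, not the code from the PR, and it only covers deriving the missing name, not postponing references to the not-yet-named Position:
import textx.scoping.providers

class NameDerivingFQN(textx.scoping.providers.FQN):
    # hypothetical provider: resolve Position.location via the built-in FQN,
    # then fill in the missing Position name once the location is known
    def __call__(self, obj, attr, obj_ref):
        location = super().__call__(obj, attr, obj_ref)
        if location is not None and not obj.name:
            # assumes obj.employee has already been resolved at this point
            obj.name = obj.employee.name + "At" + location.name
        return location

# registered in place of the plain FQN provider:
# meta_model.register_scope_providers({"Position.location": NameDerivingFQN()})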
Possible workaround
A preliminary sketch of how your use case can be solved is discussed in https://github.com/textX/textX/pull/194 (with the attached issue https://github.com/textX/textX/issues/193). This textX PR contains a version of scoping.py you could probably use for your project (just copy and rename the module). A full-fledged solution could become part of textX TEP-001, where we plan to make scoping more controllable for the end user.
Playing around with this absolutely interesting issue revealed new aspects of the textX framework to me:
Names can depend on model contents (involving unresolved references). Such name resolution can itself be Postponed (in the sense of the referenced PR, see below) in terms of our reference-resolution logic.
Even more interesting are the consequences of that: what happens to references pointing to locations where unresolved names are found? Here, we must postpone the reference-resolution process, because we cannot know whether the name might match once it is resolved...
Your example is included: https://github.com/textX/textX/blob/analysis/issue193/tests/functional/test_scoping/test_name_resolver/test_issue193_auto_name.py

How to use Eiffel functions?

So I'm just starting to learn Eiffel. One of the first exercises in the book I'm using says to make a function that does base^exp without using ^. I've copied my code below.
class
    APPLICATION

inherit
    ARGUMENTS

create
    make

feature {NONE} -- Initialization

    make
            -- Run application.
        do
            create power(2;3)
            printf("2 to the power of 3 is " + answer)
        end

    power(base : REAL; exp : INTEGER) : REAL
            -- computers base raised to the bower of exp without using ^
        local
            remain : INTEGER
        do
            remain := exp
            if remain = 0 then
                result := 1
            else
                from
                until
                    remain = 0
                loop
                    result := result * result
                    remain := remain - 1
                end
            end
        end

end
How do I use this? Do I need it on the same level as feature {NONE}'s make? I know the way I'm calling it is wrong, and I can't find anything in the chapter I just read, or online, about how to pass parameters into it or how to use its result.
There are several issues with the original code:
create is used to create an object, but here you are not creating anything; you only want the result of a computation, obtained by calling the function power. Therefore the keyword create is not needed.
You are using an entity answer to report the result of the evaluation on the screen. However, it is not declared anywhere. The proper place for the declaration would be a local variable declaration section.
The entity answer is not initialized to the result of the function power. This is usually done by an assignment instruction.
Feature arguments are separated by a comma, not by a semicolon.
From the original code it's unclear what the type of the variable answer is. Assuming it matches the return type of the function power, it needs to be converted to a string before being appended to one. This is done by calling the feature out.
The standard feature for printing a string to a console is print, not printf.
Combining the critical points above, we get
make
        -- Run application.
    local
        answer: REAL
    do
        answer := power(2, 3)
        print ("2 to the power of 3 is " + answer.out)
    end
After that the code can be compiled. Now less critical points:
It is good style to put features into dedicated feature clauses, so I would add a line like feature -- Basic operations before the feature power.
The implementation of the feature power has at least two problems. I'm not going to detail them here, but would give two hints instead:
by default, the numeric Result is initialized to 0; this needs to be taken into account by operations that use it before assigning it any other value
even though the argument base is passed to the function power, it remains unused in the original version of the code

How to write a function which calls a Simulink file in it

I just wrote an m-file with some defined inputs in which a Simulink file is called.
It worked correctly,
but when I define a function based on the same m-file (so I can give multiple inputs to it), it gives me this error:
""
Invalid matrix-format variable specified as workspace input in 'blocks/From Workspace'. The matrix
must have two dimensions and at least two columns. Complex signals of any data type and non-double
real signals must be in structure format. The first column must contain time values and the
remaining columns the data values.
""
But I'm pretty sure that variable has two dimensions and two columns.
I don't have any idea what to do here.
What can I do?
Are you saying the mfile that runs your Simulink simulation works when the mfile is a script, but not when the mfile is a function? If so, this answer may provide some insight. Despite a preference for functions, I use scripts to run Simulink parameter studies - it was just easier to set up.

Organizing Notebooks & Saving Results in Mathematica

As of now I use 3 Notebooks:
Functions
Where I have all the functions I created and call in the other Notebooks.
Transformation
Based on the original data, I compute transformations and add columns/lists.
Where data is my raw data, I then compute:
t1data : the result of the first transformation
t2data : the result of the second transformation
and so on;
I am already at t20.
Display & Analysis
Using both of the above, I create Manipulate objects that enable me to analyze the data.
Questions
Is there a way to save the results of the Transformation Notebook so that t13data, for example, can be used in the Display & Analysis Notebook without running all the previous computations (t1, t2, t3, ..., t12) it is based on?
Is there a way to use my Functions or transformed data without opening the corresponding Notebook ?
Does my separation strategy make sense at all ?
As of now I systematically open all 3 and have to run them all before being able to do anything, which takes a while given my poor computing power and still-inefficient code.
Saving variable states: can be done using DumpSave, Save or Put. Read back using Get or <<
You could make a package from your functions and read those back using Needs or <<
It's not something I usually do. I opt either for a monolithic notebook containing everything (nicely layered with sections and subsections that you can fold open or closed) or for a package plus a slightly leaner analysis notebook, depending on the weather and some other hidden variables.
Saving intermediate results
The native file format for Mathematica expressions is the .m file. This is a human-readable text format, and you can view the file in a text editor if you ever doubt what is, or is not, being saved. You can load these files using Get. The shorthand form for Get is:
<< "filename.m"
Using Get will replace or refresh any existing assignments that are explicitly made in the .m file.
Saving intermediate results that are simple assignments (dat = ...) may be done with Put. The shorthand form for Put is:
dat >> "dat.m"
This saves only the assigned expression itself; to restore the definition you must use:
dat = << "dat.m"
See also PutAppend for appending data to a .m file as new results are created.
Saving results and function definitions that are complex assignments is done with Save. Examples of such assignments include:
f[x_] := subfunc[x, 2]
g[1] = "cat"
g[2] = "dog"
nCr = #!/(#2! (# - #2)!) &;
nPr = nCr[##] #2! &;
For the last example, the complexity is that nPr depends on nCr. Using Save it is sufficient to save only nPr to get a fully working definition of nPr: the definition of nCr will automatically be saved as well. The syntax is:
Save["nPr.m", nPr]
Using Save the assignments themselves are saved; to restore the definitions use:
<< "nPr.m" ;
Moving functions to a Package
In addition to Put and Save, or manual creation in a text editor, .m files may be generated automatically. This is done by creating a Notebook and setting Cell > Cell Properties > Initialization Cell on the cells that contain your function definitions. When you save the Notebook for the first time, Mathematica will ask if you want to create an Auto Save Package. Do so, and Mathematica will generate a .m file in parallel to the .nb file, containing the contents of all Initialization Cells in the Notebook. Further, it will update this .m file every time you save the Notebook, so you never need to manually update it.
Since all Initialization Cells will be saved to the parallel .m file, I recommend using this Notebook only for the generation of the Package, and not for the rest of your computations.
When managing functions, one must consider context. Not all functions should be global at all times. A series of related functions should often be kept in its own context which can then be easily exposed to or removed from $ContextPath. Further, a series of functions often rely on subfunctions that do not need to be called outside of the primary functions, therefore these subfunctions should not be global. All of this relates to Package creation. Incidentally, it also relates to the formatting of code, because knowing that not all subfunctions must be exposed as global gives one the freedom to move many subfunctions to the "top level" of the code, that is, outside of Module or other scoping constructs, without conflicting with global symbols.
Package creation is a complex topic. You should familiarize yourself with Begin, BeginPackage, End and EndPackage to better understand it, but here is a simple framework to get you started. You can follow it as a template for the time being.
This is an old definition I used before DeleteDuplicates existed:
BeginPackage["UU`"]
UnsortedUnion::usage = "UnsortedUnion works like Union, but doesn't \
return a sorted list. \nThis function is considerably slower than \
Union though."
Begin["`Private`"]
UnsortedUnion =
  Module[{f}, f[y_] := (f[y] = Sequence[]; y); f /@ Join@##] &
End[]
EndPackage[]
Everything above goes in Initialization Cells. You can insert Text cells, Sections, or even other input cells without harming the generated Package: only the contents of the Initialization Cells will be exported.
BeginPackage defines the Context that your functions will belong to, and disables all non-System` definitions, preventing collisions. (There are ways to call other functions from your package, but that is better for another question).
By convention, a ::usage message is defined for each function that is to be accessible outside the package itself. This is not superfluous! While there are other methods, without it you will not expose your function in the visible Context.
Next, you Begin a context that is for the package alone, conventionally "`Private`". After this point any symbols you define (that are not used outside of this Begin/End block) will not be exposed globally after the Package is loaded, and will therefore not collide with Global` symbols.
After your function definition(s), you close the block with End[]. You may use as many Begin/End blocks as you like, and I typically use a separate one for each function, though it is not required.
Finally, close with EndPackage[] to restore the environment to what it was before using BeginPackage.
After you save the Notebook and generate the .m package (let's say "mypackage.m"), you can load it with Get:
<< "mypackage.m"
Now, there will be a function UnsortedUnion in the Context UU` and it will be accessible globally.
You should also look into the functionality of Needs, but that is a little more advanced in my opinion, so I shall stop here.