How to resume training in spacy transformers for NER - deep-learning

I have created a spaCy transformer model for named entity recognition. Last time I trained it until it reached 90% accuracy, and I also have a model-best directory from which I can load the trained model for predictions. But now I have some more data samples, and I wish to resume training this spaCy transformer. I saw that we can do it by changing the config.cfg, but I am clueless about what to change.
This is my config.cfg after running python -m spacy init fill-config ./base_config.cfg ./config.cfg:
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "en"
pipeline = ["transformer","ner"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100
[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false
nO = null
[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"
[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"
mixed_precision = false
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
use_fast = true
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005
[training.score_weights]
ents_f = 1.0
ents_p = 0.0
ents_r = 0.0
ents_per_type = null
[pretraining]
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
As you can see, there is a 'vectors' parameter under [initialize], so I tried pointing it at 'model-best'. But it gave me this error:
OSError: [E884] The pipeline could not be initialized because the vectors could not be found at './model-best/ner'. If your pipeline was already initialized/trained before, call 'resume_training' instead of 'initialize', or initialize only the components that are new.
For those wondering whether I have given a wrong path: no, that directory exists, as the directory structure shows.
So, please guide me on how I can successfully resume the training from previous weights.
Thank you!

The vectors setting is not related to the transformer or what you're trying to do.
In the new config, you want to use the source option to load the components from the existing pipeline. You would modify the [components] blocks to contain only the source setting and no other settings:
[components.ner]
source = "/path/to/model-best"
[components.transformer]
source = "/path/to/model-best"
See: https://spacy.io/usage/training#config-components
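After updating the [components] blocks, resuming is just the usual train command run against the new config; a minimal sketch, assuming your new annotated samples live in ./train.spacy and ./dev.spacy:
python -m spacy train config.cfg --output ./output --paths.train ./train.spacy --paths.dev ./dev.spacy
The sourced transformer and ner components then start from the weights in model-best rather than from a fresh initialization.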

Vectors here refer to word vectors. To reuse the vocabulary from the previously trained spaCy pipeline, you can use the following config:
[components.ner]
source = "/path/to/model-best"
[initialize]
vectors = ${paths.vectors}
[initialize.before_init]
@callbacks = "spacy.copy_from_base_model.v1"
tokenizer = "/path/to/model-best"
vocab = "/path/to/model-best"

Related

Why does my agent always take the same action in DQN - Reinforcement Learning

I have trained an RL agent using the DQN algorithm. After 20000 episodes my rewards converged. Now when I test this agent, it always takes the same action, irrespective of the state. I find this very weird. Can someone help me with this? Is there any reason anyone can think of for why the agent is behaving this way?
Reward plot
When I test the agent:
state = env.reset()
print('State: ', state)
state_encod = np.reshape(state, [1, state_size])
q_values = model.predict(state_encod)
action_key = np.argmax(q_values)
print(action_key)
print(index_to_action_mapping[action_key])
print(q_values[0][0])
print(q_values[0][action_key])
q_values_plotting = []
for i in range(0, action_size):
    q_values_plotting.append(q_values[0][i])
plt.plot(np.arange(0,action_size),q_values_plotting)
Every time it gives the same q-values plot, even though the initial state is different every time. Below is the q-values plot.
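As a quick sanity check (a minimal sketch reusing the names above: env, model, state_size), you can compare the greedy action across several random resets to confirm that the network output really is state-independent:
import numpy as np

# Compare predicted q-values across several freshly reset states.
actions, qvals = [], []
for _ in range(10):
    s = env.reset()
    q = model.predict(np.reshape(s, [1, state_size]))[0]
    actions.append(int(np.argmax(q)))
    qvals.append(q)

print('chosen actions:', actions)  # all identical -> state-independent policy
print('max spread in q-values across states:', np.ptp(np.array(qvals), axis=0).max())
If the spread is essentially zero, the network is effectively ignoring its input.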
Testing code:
test_rewards = []
for episode in range(1000):
    terminal_state = False
    state = env.reset()
    episode_reward = 0
    while terminal_state == False:
        print('State: ', state)
        state_encod = np.reshape(state, [1, state_size])
        q_values = model.predict(state_encod)
        action_key = np.argmax(q_values)
        action = index_to_action_mapping[action_key]
        print('Action: ', action)
        next_state, reward, terminal_state = env.step(state, action)
        print('Next_state: ', next_state)
        print('Reward: ', reward)
        print('Terminal_state: ', terminal_state, '\n')
        print('----------------------------')
        episode_reward += reward
        state = deepcopy(next_state)
    print('Episode Reward' + str(episode_reward))
    test_rewards.append(episode_reward)
plt.plot(test_rewards)
Thanks.
Adding the environment:
import gym
import rom_vav_150mm_polyreg as rom
import numpy as np
import random

class VAVenv(gym.Env):
    def __init__(self):
        # Zone temperature set point and limits
        self.temp_sp = 24
        self.temp_sp_max = 24.5
        self.temp_sp_min = 23.7
        # no. of hours in an episode and time interval for each step
        self.MAXSTEPS = 11
        self.time_interval = 5./60.  # in hrs
        # constants
        self.zone_volume = 775

    def step(self, state, action):
        # state -> Time, Volume, Load, SAT, RAT
        # action -> CFM
        action_cfm = action[0]
        # damper_opening = state[2]
        load = state[2]
        sat = state[3]
        current_temp = state[4]
        # input
        inputs_rat = np.array([load, action_cfm, self.zone_volume, current_temp, sat])
        '''
        AFTER 5 MINUTES
        '''
        # output
        output = [self.KStep + self.time_interval, self.zone_volume, rom.load(self.KStep + self.time_interval),
                  sat, rom.rat(inputs_rat)]
        # reward calculation
        thermal_coefficient = -0.1
        zone_temperature = output[4]
        if zone_temperature < self.temp_sp_min:
            temp_penalty = self.temp_sp_min - zone_temperature
        elif zone_temperature > self.temp_sp_max:
            temp_penalty = zone_temperature - self.temp_sp_max
        else:
            temp_penalty = -10
        reward = thermal_coefficient * temp_penalty
        # create next step
        next_state = np.array(output)
        # increment simulation step count
        self.KStep += self.time_interval
        # done - end of one episode, when KStep reaches the maximum steps in an episode
        done = False
        if self.KStep > self.MAXSTEPS:
            done = True
        return next_state, reward, done

    def reset(self):
        self.KStep = 0
        # initialize all the values of a state
        initial_rat = random.uniform(23, 27)
        initial_sat = random.uniform(12, 14)
        # return a state
        return np.array([self.KStep, self.zone_volume,
                         rom.load(self.KStep), initial_sat, initial_rat])

How to read data from csv file in tensorflow?

I want to read data from a CSV file in TensorFlow, so I've been trying out different ways of reading a CSV file with 2000 lines, each line with 93 features, and I hope to get one-hot values.
My dataset is like this: the first column is the data of 93 features, and the second column is the labels of 16 one-hot values.
This is my code:
import tensorflow as tf
# data_input = pd.read_csv('ans_string.csv')
# data_train = pd.read_csv('ans_result.csv')
x = tf.placeholder(tf.float32, [None, 93])
W = tf.Variable(tf.zeros([93, 16]))
b = tf.Variable(tf.zeros([16]))
sess = tf.InteractiveSession()
filename_queue = tf.train.string_input_producer(["dataset.csv"])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)
# _, csv_row = reader.read(filename_queue)
# data = tf.decode_csv(csv_row, record_defaults=record_defaults)
record_defaults_key = [[1]] * 93    # one integer default per feature column
record_defaults_value = [[1]] * 16  # one integer default per label column
list_result_key = tf.decode_csv(key, record_defaults=record_defaults_key)
list_result_value = tf.decode_csv(value, record_defaults=record_defaults_value)
features = tf.stack(list_result_key)
labels = tf.stack(list_result_value)
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 16])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
with tf.Session() as sess:
    # something happened
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    tf.global_variables_initializer().run()
    for _ in range(1000):
        example, label = sess.run([features, labels])
        print(sess.run(example, label))
        sess.run(train_step, feed_dict={x: example, y_: label})
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy.eval({x: example, y_: label})))
    coord.request_stop()
    coord.join(threads)
I want to train my model, but I got an error like this.
How can I fix it?
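For reference, here is a minimal sketch of the same pipeline using the tf.data API instead of queue runners (TensorFlow 1.4+). It assumes each CSV line holds 93 numeric feature columns followed by 16 numeric label columns, per the description above:
import tensorflow as tf

NUM_FEATURES, NUM_LABELS = 93, 16
record_defaults = [[0.0]] * (NUM_FEATURES + NUM_LABELS)

def parse_line(line):
    # decode one CSV line into a flat list of floats, then split features/labels
    fields = tf.decode_csv(line, record_defaults=record_defaults)
    features = tf.stack(fields[:NUM_FEATURES])
    labels = tf.stack(fields[NUM_FEATURES:])
    return features, labels

dataset = (tf.data.TextLineDataset("dataset.csv")
           .map(parse_line)
           .shuffle(2000)
           .batch(32)
           .repeat())
features_batch, labels_batch = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    f, l = sess.run([features_batch, labels_batch])
    print(f.shape, l.shape)  # (32, 93) (32, 16)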

read data from luasql with utf8 format

I need to read the data in my query in UTF-8 format. I tried changing the collation of my SQL database; when I read data based on the English alphabet everything is good, but I have trouble with Arabic and other languages.
When I print a string that came from the MySQL query, it shows up like this: ???????
How can I solve this problem so the strings display correctly?
After retrieving UTF-8 strings from the database, you should manually convert them to CP1256.
You can use the function str:fromutf8() defined below.
local char, byte, pairs, floor = string.char, string.byte, pairs, math.floor
local table_insert, table_concat = table.insert, table.concat
local unpack = table.unpack or unpack
local function unicode_to_utf8(code)
    -- converts numeric UTF code (U+code) to UTF-8 string
    local t, h = {}, 128
    while code >= h do
        t[#t+1] = 128 + code%64
        code = floor(code/64)
        h = h > 32 and 32 or h/2
    end
    t[#t+1] = 256 - 2*h + code
    return char(unpack(t)):reverse()
end
local function utf8_to_unicode(utf8str, pos)
    -- pos = starting byte position inside input string (default 1)
    pos = pos or 1
    local code, size = utf8str:byte(pos), 1
    if code >= 0xC0 and code < 0xFE then
        local mask = 64
        code = code - 128
        repeat
            local next_byte = utf8str:byte(pos + size) or 0
            if next_byte >= 0x80 and next_byte < 0xC0 then
                code, size = (code - mask - 2) * 64 + next_byte, size + 1
            else
                code, size = utf8str:byte(pos), 1
            end
            mask = mask * 32
        until code < mask
    end
    -- returns code, number of bytes in this utf8 char
    return code, size
end
local map_1256_to_unicode = {
[0x80] = 0x20AC,
[0x81] = 0x067E,
[0x82] = 0x201A,
[0x83] = 0x0192,
[0x84] = 0x201E,
[0x85] = 0x2026,
[0x86] = 0x2020,
[0x87] = 0x2021,
[0x88] = 0x02C6,
[0x89] = 0x2030,
[0x8A] = 0x0679,
[0x8B] = 0x2039,
[0x8C] = 0x0152,
[0x8D] = 0x0686,
[0x8E] = 0x0698,
[0x8F] = 0x0688,
[0x90] = 0x06AF,
[0x91] = 0x2018,
[0x92] = 0x2019,
[0x93] = 0x201C,
[0x94] = 0x201D,
[0x95] = 0x2022,
[0x96] = 0x2013,
[0x97] = 0x2014,
[0x98] = 0x06A9,
[0x99] = 0x2122,
[0x9A] = 0x0691,
[0x9B] = 0x203A,
[0x9C] = 0x0153,
[0x9D] = 0x200C,
[0x9E] = 0x200D,
[0x9F] = 0x06BA,
[0xA0] = 0x00A0,
[0xA1] = 0x060C,
[0xA2] = 0x00A2,
[0xA3] = 0x00A3,
[0xA4] = 0x00A4,
[0xA5] = 0x00A5,
[0xA6] = 0x00A6,
[0xA7] = 0x00A7,
[0xA8] = 0x00A8,
[0xA9] = 0x00A9,
[0xAA] = 0x06BE,
[0xAB] = 0x00AB,
[0xAC] = 0x00AC,
[0xAD] = 0x00AD,
[0xAE] = 0x00AE,
[0xAF] = 0x00AF,
[0xB0] = 0x00B0,
[0xB1] = 0x00B1,
[0xB2] = 0x00B2,
[0xB3] = 0x00B3,
[0xB4] = 0x00B4,
[0xB5] = 0x00B5,
[0xB6] = 0x00B6,
[0xB7] = 0x00B7,
[0xB8] = 0x00B8,
[0xB9] = 0x00B9,
[0xBA] = 0x061B,
[0xBB] = 0x00BB,
[0xBC] = 0x00BC,
[0xBD] = 0x00BD,
[0xBE] = 0x00BE,
[0xBF] = 0x061F,
[0xC0] = 0x06C1,
[0xC1] = 0x0621,
[0xC2] = 0x0622,
[0xC3] = 0x0623,
[0xC4] = 0x0624,
[0xC5] = 0x0625,
[0xC6] = 0x0626,
[0xC7] = 0x0627,
[0xC8] = 0x0628,
[0xC9] = 0x0629,
[0xCA] = 0x062A,
[0xCB] = 0x062B,
[0xCC] = 0x062C,
[0xCD] = 0x062D,
[0xCE] = 0x062E,
[0xCF] = 0x062F,
[0xD0] = 0x0630,
[0xD1] = 0x0631,
[0xD2] = 0x0632,
[0xD3] = 0x0633,
[0xD4] = 0x0634,
[0xD5] = 0x0635,
[0xD6] = 0x0636,
[0xD7] = 0x00D7,
[0xD8] = 0x0637,
[0xD9] = 0x0638,
[0xDA] = 0x0639,
[0xDB] = 0x063A,
[0xDC] = 0x0640,
[0xDD] = 0x0641,
[0xDE] = 0x0642,
[0xDF] = 0x0643,
[0xE0] = 0x00E0,
[0xE1] = 0x0644,
[0xE2] = 0x00E2,
[0xE3] = 0x0645,
[0xE4] = 0x0646,
[0xE5] = 0x0647,
[0xE6] = 0x0648,
[0xE7] = 0x00E7,
[0xE8] = 0x00E8,
[0xE9] = 0x00E9,
[0xEA] = 0x00EA,
[0xEB] = 0x00EB,
[0xEC] = 0x0649,
[0xED] = 0x064A,
[0xEE] = 0x00EE,
[0xEF] = 0x00EF,
[0xF0] = 0x064B,
[0xF1] = 0x064C,
[0xF2] = 0x064D,
[0xF3] = 0x064E,
[0xF4] = 0x00F4,
[0xF5] = 0x064F,
[0xF6] = 0x0650,
[0xF7] = 0x00F7,
[0xF8] = 0x0651,
[0xF9] = 0x00F9,
[0xFA] = 0x0652,
[0xFB] = 0x00FB,
[0xFC] = 0x00FC,
[0xFD] = 0x200E,
[0xFE] = 0x200F,
[0xFF] = 0x06D2,
}
local map_unicode_to_1256 = {}
for code1256, code in pairs(map_1256_to_unicode) do
    map_unicode_to_1256[code] = code1256
end
function string.fromutf8(utf8str)
    local pos, result_1256 = 1, {}
    while pos <= #utf8str do
        local code, size = utf8_to_unicode(utf8str, pos)
        pos = pos + size
        code = code < 128 and code or map_unicode_to_1256[code] or ('?'):byte()
        table_insert(result_1256, char(code))
    end
    return table_concat(result_1256)
end
function string.toutf8(str1256)
    local result_utf8 = {}
    for pos = 1, #str1256 do
        local code = str1256:byte(pos)
        table_insert(result_utf8, unicode_to_utf8(map_1256_to_unicode[code] or code))
    end
    return table_concat(result_utf8)
end
Usage is:
str:fromutf8() -- to convert from UTF-8 to cp1256
str:toutf8() -- to convert from cp1256 to UTF-8
Example:
-- This is cp1256 string
local str1256 = "1\128" -- "one euro" in cp1256
-- Convert it to UTF-8
local str_utf8 = str1256:toutf8() -- "1\226\130\172" -- one euro in utf-8
-- Convert it back from UTF-8 to cp1256
local str1256_2 = str_utf8:fromutf8()
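To tie this back to LuaSQL, here is a minimal sketch (the database, table, and column names are invented for illustration) that converts each fetched UTF-8 value before display:
local luasql = require "luasql.mysql"

local env = luasql.mysql()
local con = env:connect("mydb", "user", "password")  -- hypothetical credentials
con:execute("SET NAMES utf8")  -- ask MySQL to return UTF-8 encoded strings

local cur = con:execute("SELECT name FROM people")  -- hypothetical table/column
local row = cur:fetch({}, "a")
while row do
    print(row.name:fromutf8())  -- convert UTF-8 -> cp1256 before printing
    row = cur:fetch(row, "a")
end
cur:close(); con:close(); env:close()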

Keras Input layer (None, 200, 3): why is there a None? Input expected to have 3 dimensions, but got array with shape (200, 3)

The acc and gyro arrays passed to model.fit are (200 x 3), and the Input layer shape is (200, 3). Why is there such a problem? Error when checking input: expected acc_input to have 3 dimensions, but got array with shape (200, 3). This is a visualization of my model.
Here's my code:
WIDE = 20
FEATURE_DIM = 30
CHANNEL = 1
CONV_NUM = 64
CONV_LEN = 3
CONV_LEN_INTE = 3#4
CONV_LEN_LAST = 3#5
CONV_NUM2 = 64
CONV_MERGE_LEN = 8
CONV_MERGE_LEN2 = 6
CONV_MERGE_LEN3 = 4
rnn_size=128
acc_input_tensor = Input(shape=(200,3),name = 'acc_input')
gyro_input_tensor = Input(shape=(200,3),name= 'gyro_input')
Acc_input_tensor = Reshape(target_shape=(20,30,1))(acc_input_tensor)
Gyro_input_tensor = Reshape(target_shape=(20,30,1))(gyro_input_tensor)
acc_conv1 = Conv2D(CONV_NUM,(1, 1*3*CONV_LEN),strides= (1,1*3),padding='valid',activation=None)(Acc_input_tensor)
acc_conv1 = BatchNormalization(axis=1)(acc_conv1)
acc_conv1 = Activation('relu')(acc_conv1)
acc_conv1 = Dropout(0.2)(acc_conv1)
acc_conv2 = Conv2D(CONV_NUM,(1,CONV_LEN_INTE),strides= (1,1),padding='valid',activation=None)(acc_conv1)
acc_conv2 = BatchNormalization(axis=1)(acc_conv2)
acc_conv2 = Activation('relu')(acc_conv2)
acc_conv2 = Dropout(0.2)(acc_conv2)
acc_conv3 = Conv2D(CONV_NUM,(1,CONV_LEN_LAST),strides=(1,1),padding='valid',activation=None)(acc_conv2)
acc_conv3 = BatchNormalization(axis=1)(acc_conv3)
acc_conv3 = Activation('relu')(acc_conv3)
acc_conv3 = Dropout(0.2)(acc_conv3)
gyro_conv1 = Conv2D(CONV_NUM,(1, 1*3*CONV_LEN),strides=(1,1*3),padding='valid',activation=None)(Gyro_input_tensor)
gyro_conv1 = BatchNormalization(axis=1)(gyro_conv1)
gyro_conv1 = Activation('relu')(gyro_conv1)
gyro_conv1 = Dropout(0.2)(gyro_conv1)
gyro_conv2 = Conv2D(CONV_NUM,(1, CONV_LEN_INTE),strides=(1,1),padding='valid',activation=None)(gyro_conv1)
gyro_conv2 = BatchNormalization(axis=1)(gyro_conv2)
gyro_conv2 = Activation('relu')(gyro_conv2)
gyro_conv2 = Dropout(0.2)(gyro_conv2)
gyro_conv3 = Conv2D(CONV_NUM,(1, CONV_LEN_LAST),strides=(1,1),padding='valid',activation=None)(gyro_conv2)
gyro_conv3 = BatchNormalization(axis=1)(gyro_conv3)
gyro_conv3 = Activation('relu')(gyro_conv3)
gyro_conv3 = Dropout(0.2)(gyro_conv3)
sensor_conv_in = concatenate([acc_conv3, gyro_conv3], 2)
sensor_conv_in = Dropout(0.2)(sensor_conv_in)
sensor_conv1 = Conv2D(CONV_NUM2,kernel_size=(2, CONV_MERGE_LEN),padding='SAME')(sensor_conv_in)
sensor_conv1 = BatchNormalization(axis=1)(sensor_conv1)
sensor_conv1 = Activation('relu')(sensor_conv1)
sensor_conv1 = Dropout(0.2)(sensor_conv1)
sensor_conv2 = Conv2D(CONV_NUM2,kernel_size=(2, CONV_MERGE_LEN2),padding='SAME')(sensor_conv1)
sensor_conv2 = BatchNormalization(axis=1)(sensor_conv2)
sensor_conv2 = Activation('relu')(sensor_conv2)
sensor_conv2 = Dropout(0.2)(sensor_conv2)
sensor_conv3 = Conv2D(CONV_NUM2,kernel_size=(2, CONV_MERGE_LEN3),padding='SAME')(sensor_conv2)
sensor_conv3 = BatchNormalization(axis=1)(sensor_conv3)
sensor_conv3 = Activation('relu')(sensor_conv3)
conv_shape = sensor_conv3.get_shape()
print conv_shape
x1 = Reshape(target_shape=(int(conv_shape[1]), int(conv_shape[2]*conv_shape[3])))(sensor_conv3)
x1 = Dense(64, activation='relu')(x1)
gru_1 = GRU(rnn_size, return_sequences=True, init='he_normal', name='gru1')(x1)
gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, init='he_normal', name='gru1_b')(x1)
gru1_merged = merge([gru_1, gru_1b], mode='sum')
gru_2 = GRU(rnn_size, return_sequences=True, init='he_normal', name='gru2')(gru1_merged)
gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, init='he_normal', name='gru2_b')(gru1_merged)
x = merge([gru_2, gru_2b], mode='concat')
x = Dropout(0.25)(x)
n_class=2
x = Dense(n_class)(x)
model = Model(input=[acc_input_tensor,gyro_input_tensor], output=x)
model.compile(loss='mean_squared_error',optimizer='adam')
model.fit(inputs=[acc,gyro],outputs=labels,batch_size=1, validation_split=0.2, epochs=2,verbose=1 ,
shuffle=False)
Shape (None, 200, 3) is used in Keras for batches: None stands for the batch size, because at the time you create or reshape the input arrays the batch size might still be unknown. So if you use batch_size = 128, your batch input matrix will have shape (128, 200, 3).
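A minimal sketch of that batch axis (standalone Keras; the tiny model here is illustrative, not the questioner's network):
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

acc_input = Input(shape=(200, 3), name='acc_input')  # per-sample shape, no batch axis
output = Dense(2)(acc_input)
model = Model(inputs=acc_input, outputs=output)
print(model.input_shape)  # (None, 200, 3) -- None is the batch axis

acc = np.zeros((200, 3))            # one sample: only 2 dimensions, triggers the error above
acc_batch = np.expand_dims(acc, 0)  # shape (1, 200, 3): a batch of one sample
print(model.predict(acc_batch).shape)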

doxygen creates empty data for Tcl code file

I am using the standard version on RHEL 5.5.
These are the contents of the only tcl file (sample.tcl):
##general procs used for CCK coding
#use set varname rather than return $varname
#\code
##Documented proc DlgLgrep \c
#accepts a list, and an expression. Returns a list of all elements for list,which match expression
#The expression is carried out in the DlgLgrep contect and local namespace. You can, of course use
#upvar and uplevel within your code.
#for example, let us say that you want numbers that are above 3
#puts [ DlgLgrep { $grepy > 3 } { 1 4 0 8 -2 } ]
#will yield
#4 8
proc DlgLgrep { expry listy } {
    #accepts a list, and an expression. Returns a list of all elements for list,which match expression
    #The expression is carried out in the DlgLgrep contect and local namespace. You can, of course use
    #upvar and uplevel within your code.
    #for example, let us say that you want numbers that are above 3
    #puts [ DlgLgrep { $grepy > 3 } { 1 4 0 8 -2 } ]
    #will yield
    #4 8
    set ret {}
    foreach grepy $listy {
        if { [ expr $expry ] } {
            lappend ret $grepy
        }
    }
    return $ret
}
#\endcode
These are the doxygen file contents:
PROJECT_NAME = CCK_DLG
PROJECT_NUMBER =
OUTPUT_DIRECTORY = ./
CREATE_SUBDIRS = NO
OUTPUT_LANGUAGE = English
USE_WINDOWS_ENCODING = NO
BRIEF_MEMBER_DESC = YES
REPEAT_BRIEF = YES
ABBREVIATE_BRIEF =
ALWAYS_DETAILED_SEC = NO
INLINE_INHERITED_MEMB = NO
FULL_PATH_NAMES = YES
STRIP_FROM_PATH =
STRIP_FROM_INC_PATH =
SHORT_NAMES = NO
JAVADOC_AUTOBRIEF = NO
MULTILINE_CPP_IS_BRIEF = NO
DETAILS_AT_TOP = NO
INHERIT_DOCS = YES
SEPARATE_MEMBER_PAGES = NO
TAB_SIZE = 8
ALIASES =
OPTIMIZE_OUTPUT_FOR_C = NO
OPTIMIZE_OUTPUT_JAVA = NO
BUILTIN_STL_SUPPORT = NO
DISTRIBUTE_GROUP_DOC = NO
SUBGROUPING = YES
EXTRACT_ALL = YES
EXTRACT_PRIVATE = NO
EXTRACT_STATIC = NO
EXTRACT_LOCAL_CLASSES = YES
EXTRACT_LOCAL_METHODS = NO
HIDE_UNDOC_MEMBERS = NO
HIDE_UNDOC_CLASSES = NO
HIDE_FRIEND_COMPOUNDS = NO
HIDE_IN_BODY_DOCS = NO
INTERNAL_DOCS = NO
CASE_SENSE_NAMES = YES
HIDE_SCOPE_NAMES = NO
SHOW_INCLUDE_FILES = YES
INLINE_INFO = YES
SORT_MEMBER_DOCS = YES
SORT_BRIEF_DOCS = NO
SORT_BY_SCOPE_NAME = NO
GENERATE_TODOLIST = YES
GENERATE_TESTLIST = YES
GENERATE_BUGLIST = YES
GENERATE_DEPRECATEDLIST= YES
ENABLED_SECTIONS =
MAX_INITIALIZER_LINES = 30
SHOW_USED_FILES = YES
SHOW_DIRECTORIES = NO
FILE_VERSION_FILTER =
QUIET = NO
WARNINGS = YES
WARN_IF_UNDOCUMENTED = YES
WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = NO
WARN_FORMAT = "$file:$line: $text"
WARN_LOGFILE =
INPUT = ./
FILE_PATTERNS =
RECURSIVE = NO
EXCLUDE = *.tcl
EXCLUDE_SYMLINKS = NO
EXCLUDE_PATTERNS =
EXAMPLE_PATH =
EXAMPLE_PATTERNS =
EXAMPLE_RECURSIVE = NO
IMAGE_PATH =
INPUT_FILTER =
FILTER_PATTERNS =
FILTER_SOURCE_FILES = NO
SOURCE_BROWSER = NO
INLINE_SOURCES = NO
STRIP_CODE_COMMENTS = YES
REFERENCED_BY_RELATION = YES
REFERENCES_RELATION = YES
REFERENCES_LINK_SOURCE = YES
USE_HTAGS = NO
VERBATIM_HEADERS = YES
ALPHABETICAL_INDEX = NO
COLS_IN_ALPHA_INDEX = 5
IGNORE_PREFIX =
GENERATE_HTML = YES
HTML_OUTPUT = html
HTML_FILE_EXTENSION = .html
HTML_HEADER =
HTML_FOOTER =
HTML_STYLESHEET =
HTML_ALIGN_MEMBERS = YES
GENERATE_HTMLHELP = NO
CHM_FILE =
HHC_LOCATION =
GENERATE_CHI = NO
BINARY_TOC = NO
TOC_EXPAND = NO
DISABLE_INDEX = NO
ENUM_VALUES_PER_LINE = 4
GENERATE_TREEVIEW = NO
TREEVIEW_WIDTH = 250
GENERATE_LATEX = YES
LATEX_OUTPUT = latex
LATEX_CMD_NAME = latex
MAKEINDEX_CMD_NAME = makeindex
COMPACT_LATEX = NO
PAPER_TYPE = a4wide
EXTRA_PACKAGES =
LATEX_HEADER =
PDF_HYPERLINKS = NO
USE_PDFLATEX = NO
LATEX_BATCHMODE = NO
LATEX_HIDE_INDICES = NO
GENERATE_RTF = NO
RTF_OUTPUT = rtf
COMPACT_RTF = NO
RTF_HYPERLINKS = NO
RTF_STYLESHEET_FILE =
RTF_EXTENSIONS_FILE =
GENERATE_MAN = NO
MAN_OUTPUT = man
MAN_EXTENSION = .3
MAN_LINKS = NO
GENERATE_XML = NO
XML_OUTPUT = xml
XML_SCHEMA =
XML_DTD =
XML_PROGRAMLISTING = YES
GENERATE_AUTOGEN_DEF = NO
GENERATE_PERLMOD = NO
PERLMOD_LATEX = NO
PERLMOD_PRETTY = YES
PERLMOD_MAKEVAR_PREFIX =
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = NO
EXPAND_ONLY_PREDEF = NO
SEARCH_INCLUDES = YES
INCLUDE_PATH =
INCLUDE_FILE_PATTERNS =
PREDEFINED =
EXPAND_AS_DEFINED =
SKIP_FUNCTION_MACROS = YES
TAGFILES =
GENERATE_TAGFILE =
ALLEXTERNALS = NO
EXTERNAL_GROUPS = YES
PERL_PATH = /usr/bin/perl
CLASS_DIAGRAMS = YES
HIDE_UNDOC_RELATIONS = YES
HAVE_DOT = NO
CLASS_GRAPH = YES
COLLABORATION_GRAPH = YES
GROUP_GRAPHS = YES
UML_LOOK = NO
TEMPLATE_RELATIONS = NO
INCLUDE_GRAPH = YES
INCLUDED_BY_GRAPH = YES
CALL_GRAPH = NO
CALLER_GRAPH = NO
GRAPHICAL_HIERARCHY = YES
DIRECTORY_GRAPH = YES
DOT_IMAGE_FORMAT = png
DOT_PATH =
DOTFILE_DIRS =
MAX_DOT_GRAPH_WIDTH = 1024
MAX_DOT_GRAPH_HEIGHT = 1024
MAX_DOT_GRAPH_DEPTH = 0
DOT_TRANSPARENT = NO
DOT_MULTI_TARGETS = NO
GENERATE_LEGEND = YES
DOT_CLEANUP = YES
SEARCHENGINE = NO
This is a small excerpt, of course. I am a newbie, and no one where I work has done any doxygen work.
Thanks!
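One thing worth double-checking in the Doxyfile above: EXCLUDE = *.tcl excludes exactly the file you want documented, and FILE_PATTERNS is empty. A sketch of the settings that would let the Tcl source be scanned (assuming a doxygen build new enough to include the Tcl parser, 1.7.5 or later):
INPUT          = ./
FILE_PATTERNS  = *.tcl
EXCLUDE        =
EXTRACT_ALL    = YES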