gym package not identifying ten-armed-bandits-v0 env - reinforcement-learning

Environment:
Python: 3.9
OS: Windows 10
When I try to create the ten-armed-bandits environment using the following code, an error is thrown and I'm not sure of the reason.
import gym
import gym_armed_bandits
env = gym.make('ten-armed-bandits-v0')
The error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:158, in EnvRegistry.spec(self, path)
157 try:
--> 158 return self.env_specs[id]
159 except KeyError:
160 # Parse the env name and check to see if it matches the non-version
161 # part of a valid env (could also check the exact number here)
KeyError: 'ten-armed-bandits-v0'
During handling of the above exception, another exception occurred:
UnregisteredEnv Traceback (most recent call last)
Input In [6], in <module>
----> 1 env = gym.make('ten-armed-bandits-v0')
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:235, in make(id, **kwargs)
234 def make(id, **kwargs):
--> 235 return registry.make(id, **kwargs)
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:128, in EnvRegistry.make(self, path, **kwargs)
126 else:
127 logger.info("Making new env: %s", path)
--> 128 spec = self.spec(path)
129 env = spec.make(**kwargs)
130 return env
File D:\00_PythonEnvironments\01_RL\lib\site-packages\gym\envs\registration.py:203, in EnvRegistry.spec(self, path)
197 raise error.UnregisteredEnv(
198 "Toytext environment {} has been moved out of Gym. Install it via `pip install gym-legacy-toytext` and add `import gym_toytext` before using it.".format(
199 id
200 )
201 )
202 else:
--> 203 raise error.UnregisteredEnv("No registered env with id: {}".format(id))
UnregisteredEnv: No registered env with id: ten-armed-bandits-v0
Yet when I check the available environments, I can see it listed there:
from gym import envs
print(envs.registry.all())
dict_values([EnvSpec(CartPole-v0), EnvSpec(CartPole-v1), EnvSpec(MountainCar-v0), EnvSpec(MountainCarContinuous-v0), EnvSpec(Pendulum-v1), EnvSpec(Acrobot-v1), EnvSpec(LunarLander-v2), EnvSpec(LunarLanderContinuous-v2), EnvSpec(BipedalWalker-v3), EnvSpec(BipedalWalkerHardcore-v3), EnvSpec(CarRacing-v0), EnvSpec(Blackjack-v1), EnvSpec(FrozenLake-v1), EnvSpec(FrozenLake8x8-v1), EnvSpec(CliffWalking-v0), EnvSpec(Taxi-v3), EnvSpec(Reacher-v2), EnvSpec(Pusher-v2), EnvSpec(Thrower-v2), EnvSpec(Striker-v2), EnvSpec(InvertedPendulum-v2), EnvSpec(InvertedDoublePendulum-v2), EnvSpec(HalfCheetah-v2), EnvSpec(HalfCheetah-v3), EnvSpec(Hopper-v2), EnvSpec(Hopper-v3), EnvSpec(Swimmer-v2), EnvSpec(Swimmer-v3), EnvSpec(Walker2d-v2), EnvSpec(Walker2d-v3), EnvSpec(Ant-v2), EnvSpec(Ant-v3), EnvSpec(Humanoid-v2), EnvSpec(Humanoid-v3), EnvSpec(HumanoidStandup-v2), EnvSpec(FetchSlide-v1), EnvSpec(FetchPickAndPlace-v1), EnvSpec(FetchReach-v1), EnvSpec(FetchPush-v1), EnvSpec(HandReach-v0), EnvSpec(HandManipulateBlockRotateZ-v0), EnvSpec(HandManipulateBlockRotateZTouchSensors-v0), EnvSpec(HandManipulateBlockRotateZTouchSensors-v1), EnvSpec(HandManipulateBlockRotateParallel-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensors-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensors-v1), EnvSpec(HandManipulateBlockRotateXYZ-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensors-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensors-v1), EnvSpec(HandManipulateBlockFull-v0), EnvSpec(HandManipulateBlock-v0), EnvSpec(HandManipulateBlockTouchSensors-v0), EnvSpec(HandManipulateBlockTouchSensors-v1), EnvSpec(HandManipulateEggRotate-v0), EnvSpec(HandManipulateEggRotateTouchSensors-v0), EnvSpec(HandManipulateEggRotateTouchSensors-v1), EnvSpec(HandManipulateEggFull-v0), EnvSpec(HandManipulateEgg-v0), EnvSpec(HandManipulateEggTouchSensors-v0), EnvSpec(HandManipulateEggTouchSensors-v1), EnvSpec(HandManipulatePenRotate-v0), EnvSpec(HandManipulatePenRotateTouchSensors-v0), EnvSpec(HandManipulatePenRotateTouchSensors-v1), EnvSpec(HandManipulatePenFull-v0), EnvSpec(HandManipulatePen-v0), EnvSpec(HandManipulatePenTouchSensors-v0), EnvSpec(HandManipulatePenTouchSensors-v1), EnvSpec(FetchSlideDense-v1), EnvSpec(FetchPickAndPlaceDense-v1), EnvSpec(FetchReachDense-v1), EnvSpec(FetchPushDense-v1), EnvSpec(HandReachDense-v0), EnvSpec(HandManipulateBlockRotateZDense-v0), EnvSpec(HandManipulateBlockRotateZTouchSensorsDense-v0), EnvSpec(HandManipulateBlockRotateZTouchSensorsDense-v1), EnvSpec(HandManipulateBlockRotateParallelDense-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensorsDense-v0), EnvSpec(HandManipulateBlockRotateParallelTouchSensorsDense-v1), EnvSpec(HandManipulateBlockRotateXYZDense-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensorsDense-v0), EnvSpec(HandManipulateBlockRotateXYZTouchSensorsDense-v1), EnvSpec(HandManipulateBlockFullDense-v0), EnvSpec(HandManipulateBlockDense-v0), EnvSpec(HandManipulateBlockTouchSensorsDense-v0), EnvSpec(HandManipulateBlockTouchSensorsDense-v1), EnvSpec(HandManipulateEggRotateDense-v0), EnvSpec(HandManipulateEggRotateTouchSensorsDense-v0), EnvSpec(HandManipulateEggRotateTouchSensorsDense-v1), EnvSpec(HandManipulateEggFullDense-v0), EnvSpec(HandManipulateEggDense-v0), EnvSpec(HandManipulateEggTouchSensorsDense-v0), EnvSpec(HandManipulateEggTouchSensorsDense-v1), EnvSpec(HandManipulatePenRotateDense-v0), EnvSpec(HandManipulatePenRotateTouchSensorsDense-v0), EnvSpec(HandManipulatePenRotateTouchSensorsDense-v1), EnvSpec(HandManipulatePenFullDense-v0), EnvSpec(HandManipulatePenDense-v0), 
EnvSpec(HandManipulatePenTouchSensorsDense-v0), EnvSpec(HandManipulatePenTouchSensorsDense-v1), EnvSpec(CubeCrash-v0), EnvSpec(CubeCrashSparse-v0), EnvSpec(CubeCrashScreenBecomesBlack-v0), EnvSpec(MemorizeDigits-v0), EnvSpec(three-armed-bandits-v0), EnvSpec(five-armed-bandits-v0), EnvSpec(ten-armed-bandits-v0), EnvSpec(MultiarmedBandits-v0)])

It could be a problem with your Python version: the k-armed-bandits library was written about four years ago, before Python 3.9 existed. Besides this, the configuration files in the repo indicate that the supported Python version is 2.7 (not 3.9).
If you create an environment with Python 2.7 and follow the setup instructions, it works correctly on Windows:
git clone <gym_armed_bandits repository URL>
cd gym_armed_bandits
pip install -e .
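Once the package is installed in that environment, a quick sanity check (a minimal sketch, using the same registry call as in the question) confirms the id is visible to the interpreter before calling gym.make:
import gym
import gym_armed_bandits  # importing the package is what registers the bandit envs

# Confirm the id is registered in *this* interpreter's gym registry
registered_ids = {spec.id for spec in gym.envs.registry.all()}
print('ten-armed-bandits-v0' in registered_ids)

env = gym.make('ten-armed-bandits-v0')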


Load custom package model to get model vocabulary in AllenNLP python interface

I'm trying to get the vocabulary from some publicly available pre-trained models (that aren't mine) using the Python interface of AllenNLP, via the model's vocab attribute. However, I'm running into problems trying to load the model. I'm looking to get the vocabulary from the dygiepp models, using the following code:
from allennlp.models.model import Model
scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
However, I get the following error:
---------------------------------------------------------------------------
ConfigurationError Traceback (most recent call last)
/tmp/local/63381207/ipykernel_7616/3549263982.py in <module>
----> 1 scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py in from_archive(cls, archive_file, vocab)
480 from allennlp.models.archival import load_archive # here to avoid circular imports
481
--> 482 model = load_archive(archive_file).model
483 if vocab:
484 model.vocab.extend_from_vocab(vocab)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
231 # Instantiate model and dataset readers. Use a duplicate of the config, as it will get consumed.
232 dataset_reader, validation_dataset_reader = _load_dataset_readers(
--> 233 config.duplicate(), serialization_dir
234 )
235 model = _load_model(config.duplicate(), weights_path, serialization_dir, cuda_device)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in _load_dataset_readers(config, serialization_dir)
267
268 dataset_reader = DatasetReader.from_params(
--> 269 dataset_reader_params, serialization_dir=serialization_dir
270 )
271 validation_dataset_reader = DatasetReader.from_params(
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py in from_params(cls, params, constructor_to_call, constructor_to_inspect, **extras)
586 "type",
587 choices=as_registrable.list_available(),
--> 588 default_to_first_choice=default_to_first_choice,
589 )
590 subclass, constructor_name = as_registrable.resolve_class_name(choice)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/params.py in pop_choice(self, key, choices, default_to_first_choice, allow_class_names)
322 """{"model": "my_module.models.MyModel"} to have it imported automatically."""
323 )
--> 324 raise ConfigurationError(message)
325 return value
326
ConfigurationError: dygie not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'multitask_shim', 'sequence_tagging', 'sharded', 'text_classification_json']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
The error message describes how to fix the problem from the command line, but not from the Python interface. I additionally tried adding the line import dygie to my code to import the missing package, but that didn't solve the problem.
Does anyone know how to get around this?
To run this model, you'll need to have the code from this repo: https://github.com/dwadden/dygiepp.
In particular, you need to import the DyGIE dataset reader from here: https://github.com/dwadden/dygiepp/blob/master/dygie/data/dataset_readers/dygie.py#L29
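A minimal sketch of what that looks like from the Python interface, assuming you have cloned dygiepp and made it importable (e.g. pip install -e . from the repo root, or adding it to PYTHONPATH); importing the reader's module, rather than just the top-level package, is what actually registers it:
from allennlp.models.model import Model

# Importing this module runs the reader's registration decorator, which is
# the Python-interface equivalent of the --include-package flag.
import dygie.data.dataset_readers.dygie  # noqa: F401

scierc_model = Model.from_archive(
    'https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz'
)
print(scierc_model.vocab)
If loading then fails on another missing type (e.g. the model type itself), import the corresponding module from the repo in the same way.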

Cannot load a CSV file in Colab using tf.compat.v1.keras.utils.get_file

I have mounted my GDrive and have a csv file in a folder. I am following the tutorial. However, when I call tf.keras.utils.get_file(), I get a ValueError as follows.
data_folder = r"/content/drive/My Drive/NLP/project2/data"
import os
print(os.listdir(data_folder))
It returns:
['crowdsourced_labelled_dataset.csv',
'P2_Testing_Dataset.csv',
'P2_Training_Dataset_old.csv',
'P2_Training_Dataset.csv']
TRAIN_DATA_URL = os.path.join(data_folder, 'P2_Training_Dataset.csv')
train_file_path = tf.compat.v1.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
But this returns:
Downloading data from /content/drive/My Drive/NLP/project2/data/P2_Training_Dataset.csv
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-16-5bd642083471> in <module>()
2 TRAIN_DATA_URL = os.path.join(data_folder, 'P2_Training_Dataset.csv')
3 TEST_DATA_URL = os.path.join(data_folder, 'P2_Testing_Dataset.csv')
----> 4 train_file_path = tf.compat.v1.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
5 test_file_path = tf.compat.v1.keras.utils.get_file("eval.csv", TEST_DATA_URL)
6 frames
/usr/lib/python3.6/urllib/request.py in _parse(self)
382 self.type, rest = splittype(self._full_url)
383 if self.type is None:
--> 384 raise ValueError("unknown url type: %r" % self.full_url)
385 self.host, self.selector = splithost(rest)
386 if self.host:
ValueError: unknown url type: '/content/drive/My Drive/NLP/project2/data/P2_Training_Dataset.csv'
What am I doing wrong please?
As per the docs, this is the signature of tf.compat.v1.keras.utils.get_file:
tf.keras.utils.get_file(
    fname,
    origin,
    untar=False,
    md5_hash=None,
    file_hash=None,
    cache_subdir='datasets',
    hash_algorithm='auto',
    extract=False,
    archive_format='auto',
    cache_dir=None
)
By default the file at the url origin is downloaded to the cache_dir ~/.keras, placed in the cache_subdir datasets, and given the filename fname. The final location of a file example.txt would therefore be ~/.keras/datasets/example.txt.
Returns:
Path to the downloaded file
Since you already have the data in your Drive, there's no need to download it again; as the traceback shows, the function expects an accessible URL as its origin. There's also no need to obtain the file name from a function call, because you already know it.
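For comparison, get_file is meant for remote origins, as in the tutorial itself (the URL below is the Titanic CSV commonly used in the TF tutorials; adjust if it has moved):
import tensorflow as tf

# Downloads once into ~/.keras/datasets/ and returns the cached local path
titanic_train_path = tf.keras.utils.get_file(
    "train.csv",
    "https://storage.googleapis.com/tf-datasets/titanic/train.csv")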
Assuming the drive is mounted, you can replace your file paths as below:
train_file_path = os.path.join(data_folder, 'P2_Training_Dataset.csv')
test_file_path = os.path.join(data_folder, 'P2_Testing_Dataset.csv')
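From there, the files can be read directly; a minimal sketch using tf.data (the label_name value below is hypothetical, substitute your dataset's label column):
import tensorflow as tf

train_dataset = tf.data.experimental.make_csv_dataset(
    train_file_path,
    batch_size=32,
    label_name='label',  # hypothetical column name; use your own
    num_epochs=1)

# Inspect one batch of (features, labels)
for features, labels in train_dataset.take(1):
    print(features, labels)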

Stanford NER Tagger and NLTK - not working [OSError: Java command failed]

Trying to run the Stanford NER Tagger and NLTK from a Jupyter notebook.
I keep getting
OSError: Java command failed
I have already tried the hack at
https://gist.github.com/alvations/e1df0ba227e542955a8a
and the thread
Stanford Parser and NLTK
I am using
NLTK==3.3
Ubuntu==16.04LTS
Here is my Python code:
from nltk import sent_tokenize, word_tokenize
from nltk.tag import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"
sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]
I got these files using the following commands:
wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip
unzip stanford-parser-full-2015-04-20.zip
unzip stanford-postagger-full-2015-04-20.zip
I am getting the following error:
CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
-loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
... 2 more
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
79 def tag(self, tokens):
80 # This function should return list of tuple rather than list of list
---> 81 return sum(self.tag_sents([tokens]), [])
82
83 def tag_sents(self, sentences):
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
102 # Run the tagger and get the output
103 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104 stdout=PIPE, stderr=PIPE)
105 stanpos_output = stanpos_output.decode(encoding)
106
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
134 if p.returncode != 0:
135 print(_decode_stdoutdata(stderr))
--> 136 raise OSError('Java command failed : ' + str(cmd))
137
138 return (stdout, stderr)
OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']
Download the Stanford Named Entity Recognizer version 3.9.1: see the 'Download' section of the Stanford NLP website.
Unzip it and move the two files stanford-ner.jar and english.all.3class.distsim.crf.ser.gz into your working folder (the code below assumes they sit next to the notebook).
Open a Jupyter notebook or IPython prompt in that folder and run the following Python code:
import nltk
from nltk.tag.stanford import StanfordNERTagger

sentence = u"Twenty miles east of Reno, Nev., " \
           "where packs of wild mustangs roam free through " \
           "the parched landscape, Tesla Gigafactory 1 " \
           "sprawls near Interstate 80."

jar = './stanford-ner.jar'
model = './english.all.3class.distsim.crf.ser.gz'

ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')
words = nltk.word_tokenize(sentence)

# Run the NER tagger on the tokenized words
print(ner_tagger.tag(words))
I tested this on NLTK 3.3 and Ubuntu 16.04 LTS.
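Note that NLTK raises the generic OSError: Java command failed whenever the Java subprocess exits non-zero, whatever the underlying cause. If, after matching the jar and model versions, Java itself cannot be found or needs more heap, you can point NLTK at a JVM explicitly; a minimal sketch (the JVM path below is hypothetical, use your own):
import nltk.internals

# Tell NLTK exactly which JVM to launch, and raise the heap limit for large models
nltk.internals.config_java('/usr/lib/jvm/java-8-openjdk-amd64/bin/java',
                           options='-mx2g')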

TensorFlow tf.decode_csv() not recognizing end of line character

I am trying to read in a CSV file in TensorFlow.
record_defaults = [[0.0], [0.0]]
data = tf.decode_csv(r"C:\Users\USER.NAME\Desktop\tmp.txt", record_defaults=record_defaults)
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
sess.run(tf.global_variables_initializer())
print(sess.run(data))
sess.close()
Where tmp.txt is a simple CSV:
1.0,4.0
-.3,1.2
Note that I am running Windows and Notepad++ shows that my lines end with '\r\n' (CRLF).
I get the following error when running the above code, which suggests to me that TensorFlow isn't recognizing the end-of-line character:
InvalidArgumentError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in
_do_call(self, fn, *args)
1021 try:
-> 1022 return fn(*args)
1023 except errors.OpError as e:
C:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1003 feed_dict, fetch_list, target_list,
-> 1004 status, run_metadata)
1005
C:\Anaconda3\Lib\contextlib.py in __exit__(self, type, value, traceback)
65 try:
---> 66 next(self.gen)
67 except StopIteration:
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
465 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466 pywrap_tensorflow.TF_GetCode(status))
467 finally:
InvalidArgumentError: Expect 2 fields but have 1 in record 0
[[Node: DecodeCSV = DecodeCSV[OUT_TYPE=[DT_FLOAT, DT_FLOAT], field_delim=",", _device="/job:localhost/replica:0/task:0/cpu:0"](DecodeCSV/records, DecodeCSV/record_defaults_0, DecodeCSV/record_defaults_1)]]
The error persists even when I change the delimiter to a space or tab.
I've searched across Google/StackOverflow but haven't been able to find a similar error. Any help is appreciated. Thank you!
Convert your file to Unix format. I am assuming you are working on Windows. Either way, in Notepad++ change the file's line endings as below:
From the "Edit" menu, select "EOL Conversion" -> "UNIX/OSX Format".
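If you would rather do the conversion programmatically, a minimal sketch (assuming the file is small enough to read into memory):
path = r"C:\Users\USER.NAME\Desktop\tmp.txt"

# Rewrite the file in place with Unix (\n) line endings
with open(path, 'rb') as f:
    content = f.read()
with open(path, 'wb') as f:
    f.write(content.replace(b'\r\n', b'\n'))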

How to execute a Scala Swing application?

I am new to Scala and trying to execute a Swing application.
I am using Scala 2.8.
I have compiled the program successfully, but while executing it shows an error like "no such file".
Can anyone please help me out?
I am providing the code I am trying to execute.
Gui.scala
import swing._
object Gui extends SimpleSwingApplication
{
  def top = new MainFrame {
    title = "swing"
    val b1 = new Button {
      text = "ok"
    }
  }
}
scalac Gui.scala
it compiles successfully and creates the class files,
but when I try
scala Gui
it just replies
No such File
Setup:
D:\src\scala_ex\ex1>dir
Volume in drive D is Data
Volume Serial Number is 5C88-8D6C
Directory of D:\src\scala_ex\ex1
01.12.2010 09:25 <DIR> .
01.12.2010 09:25 <DIR> ..
01.12.2010 09:24 173 gui.scala
1 File(s) 173 bytes
2 Dir(s) 24 575 205 376 bytes free
D:\src\scala_ex\ex1>more gui.scala
import swing._
object Gui extends SimpleSwingApplication {
  def top = new MainFrame {
    title = "swing"
    val b1 = new Button {
      text = "ok"
    }
  }
}
D:\src\scala_ex\ex1>scalac -version
Scala compiler version 2.8.1.final -- Copyright 2002-2010, LAMP/EPFL
Compile:
D:\src\scala_ex\ex1>scalac gui.scala
D:\src\scala_ex\ex1>dir
Volume in drive D is Data
Volume Serial Number is 5C88-8D6C
Directory of D:\src\scala_ex\ex1
01.12.2010 09:26 <DIR> .
01.12.2010 09:26 <DIR> ..
01.12.2010 09:26 485 Gui$$anon$1$$anon$2.class
01.12.2010 09:26 557 Gui$$anon$1.class
01.12.2010 09:26 558 Gui$.class
01.12.2010 09:26 1 467 Gui.class
01.12.2010 09:24 173 gui.scala
5 File(s) 3 240 bytes
2 Dir(s) 24 575 201 280 bytes free
Execute:
D:\src\scala_ex\ex1>scala -cp . Gui
And the application starts.
This is not a direct cut-and-paste of the posted Scala code, as the blank line between object Gui and { causes a compilation error.
Now, if you fix that error and compile this with Scala 2.8, you should get these classes in the local directory:
Gui$$anon$1$$anon$2.class
Gui$$anon$1.class
Gui$.class
Gui.class
If you don't, then either the compilation did not work, or there's something else missing. For example, if you declared a package X at the top (and removed it from the example), then Gui won't be in the local directory, but under a subdirectory X, and you should invoke it by typing scala X.Gui.
Another possibility is that you have some Java environment variable pointing the output directory to someplace else.