Stata doesn't save output file - regression

I'm having difficulty saving the output of my regression. Stata is supposed to save the file as "output.dta" in the defined directory, but the file does not appear in that folder (or anywhere else on my PC). Here is the final piece of code, where I want it to be saved:
if (`counter'==1) {
    save "C:\Users\Milla\Code\output", replace
    local counter = `counter' + 1
}
if (`counter'!=1) {
    cap append using "C:\Users\Milla\Code\output"
    duplicates drop *, force
    cap save "C:\Users\Milla\Code\output", replace
}
Does anyone have an idea why this could happen? The code runs fine and throws no errors or warnings. But it also doesn't report "file output.dta saved", as Stata normally does when you save anything.
Thanks in advance and best regards,
Milla

Try specifying the file extension explicitly (r abbreviates replace):
save "C:\Users\Milla\Code\output.dta", r
Also note that the cap (capture) prefix suppresses error messages, which would explain why you see no errors even though nothing is saved. While debugging, remove cap from the append and save calls to see what they report.

Related

How to export .csv files using a script in Dymola?

I am running a big set of simulations in Dymola using a script and, so far, it works well.
However, it remains incomplete because all the results are still in .mat and I have not found a way to save them as .csv automatically.
I found the DataFiles.convertMATtoCSV() function, but it requires me to specify a list of variables to export. I would like it to export all the variables without listing them one by one. Is that possible?
In the Dymola Manual, there is a section "Saving all values into a CSV file".
It contains the following example code:
// Define the name of the trajectory file (fileName) and of the
// CSV file (CSVfile)
fileName = "PID_Controller.mat";
CSVfile = "AllVariables.csv";
// Read the size of the trajectories in the result file and
// store it in 'n'
n = readTrajectorySize(fileName);
// Read the names of the trajectories
names = readTrajectoryNames(fileName);
// Read the trajectories 'names' (and store them in 'traj')
traj = readTrajectory(fileName, names, n);
// Transpose traj so each column holds one variable's trajectory
traj_transposed = transpose(traj);
// Write the .csv file using the package 'DataFiles'
DataFiles.writeCSVmatrix(CSVfile, names, traj_transposed);
This should do what you want, and it leaves room for customization later if necessary.
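If you want a quick sanity check of the exported file outside Dymola, a few lines of Python will do. This is only a sketch: it assumes writeCSVmatrix produces a header row of variable names with one row per time sample, and it reuses the file name from the example above.
import pandas as pd
# Load the CSV written by the Dymola script (file name from the example).
df = pd.read_csv("AllVariables.csv")
print(df.shape)              # (number of time samples, number of variables)
print(list(df.columns[:5]))  # the first few exported variable names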

EOFError: Ran out of input with pickle while the file is not empty

This is the part of my code where I open the file and hit this error even though the file is not empty: I opened and wrote the file hundreds of times during my code without this problem, and its size on disk is 25 MB, so it's definitely not empty. Here is where I open it:
file_to_read = open("attribute.pickle", "rb")
unpickler = pickle.Unpickler(file_to_read)
attribute = unpickler.load()
file_to_read.close()
The attribute object is updated (appended to) as the code runs, and here is where I save it back to the file:
geeky_file = open("attribute.pickle", 'wb')
pickle.dump(attribute, geeky_file)
geeky_file.close()
So this opening, appending, and saving ran in a loop hundreds of times without any problem. Now that I wanted to rerun the code, it hit the "EOFError: Ran out of input" error when loading, at
attribute = unpickler.load()
I tried reading the posts about this error, but all the answers say the file might be empty, while mine definitely isn't.
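One thing worth ruling out, though it is only a guess from what is shown here: a nonzero file size does not prove the pickle stream is complete. If an earlier run was interrupted partway through pickle.dump, the file can be left truncated, and load() then fails with exactly this EOFError even though the file is 25 MB. A minimal sketch of an atomic-write pattern that prevents this in future runs (same file name as above; the helper names are made up):
import os
import pickle

def save_attribute(attribute, path="attribute.pickle"):
    # Write to a temporary file first, then atomically swap it into place,
    # so an interrupted run can never leave a truncated pickle behind.
    tmp_path = path + ".tmp"
    with open(tmp_path, "wb") as f:
        pickle.dump(attribute, f)
        f.flush()
        os.fsync(f.fileno())  # force the bytes to disk before the rename
    os.replace(tmp_path, path)  # atomic rename on both POSIX and Windows

def load_attribute(path="attribute.pickle"):
    with open(path, "rb") as f:
        return pickle.load(f)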

Q/KDB+ / CSV upload and WSFULL

Pardon me, but I'm a q novice and couldn't find a solution. The code below appends a four-column CSV file to a kdb+ database. This code worked well but, now that my database is large, it throws a WSFULL error. Perhaps there is a more memory-efficient way to write it. Please help:
// FILE_LOADER.q
\c 520 500
if [(count .z.x) < 1;
show `$"usage: q loadcsv.q inputfile destfile
where inputfile and destfile are absolute or relative paths to
the files. Inputfile has the following fields:
DATE, TICKER, FIELD, VALUE. DATE is of type date,
TICKER and FIELD are strings, and VALUE is converted to a float.
Any string VALUEs will show up as nulls.";
exit 1
]
f1: hsym `$.z.x[0]
f2: hsym `$.z.x[1]
columns: `DATE`TICKER`FIELD`VALUE
if [() ~ key f1; show ("Input file '",.z.x[0],"' not found");exit 1]
x: .Q.fsn[{f2 upsert flip columns!("DSSF";",")0:x};f1;4194000000]
show ("loaded ",(string x)," characters into the kdb database")
exit 0
First, just from trying this out: I assume your input CSV file never has a header? If it does, you'll need a slight code change so kdb is aware of it.
You are correct that it's a memory issue, and what you can do is simply decrease the chunk size. You are reading in 4194000000 bytes at a time right now; try lowering this in accordance with the available memory.
If you are still seeing issues, it may be your garbage-collection settings. You could force a gc after each read/upsert:
.Q.fsn[{f2 upsert flip columns!("DSSF";",")0:x; .Q.gc[]};f1;4194000000]

Python String replacement in a file each time gives different result

I have a JSON file and I want to make some replacements in it. I've written code that works, but inconsistently.
This is where the replacement gets done.
replacements1 = {builtTelefon:'Isim', builtIlce:'Isim', builtAdres:'Isim', builtIsim:'Isim'}
replacements3 = {builtYesterdayTelefon:'Isim', builtYesterdayIlce:'Isim', builtYesterdayAdres:'Isim', builtYesterdayIsim:'Isim'}
with open('veri3.json', encoding='utf-8') as infile, open('veri2.json', 'w') as outfile:
for line in infile:
for src, target in replacements1.items():
line = line.replace(src, target)
for src, target in replacements3.items():
line = line.replace(src, target)
outfile.write(line)
Here are some examples of what builtAdres and builtYesterdayAdres look like:
01 Temmuz 2018 Pazar.1
30 Haziran 2018 Cumartesi.1
I run this on my data, but it produces a different result each time, even though I run exactly the same code on exactly the same input every run.
What it should do is test the entire file for 01 Temmuz 2018 Pazar and, wherever it finds a match, replace it with the string Isim without touching anything else, then likewise replace any occurrence of 30 Haziran 2018 Cumartesi with Isim.
What's causing this?
Example files for re-testing:
pastebin - veri3.json
pastebin - code.py
I think you have just one problem: you're using "Isim" as a key name multiple times within the same object, and that botches the JSON.
The reason you're "getting different results" probably comes down to the client you're using to display the JSON. If you look at the raw data, the JSON should be fully altered (I ran your script and it is). However, the client won't handle the repeated keys well and will display each object as best it can.
In fact, I'm not sure how you get "Isim.1", "Isim.2" as keys, since you actually use "Isim" for all of them. The client must be trying to cope with the duplication there.
Try this code, where I use "Isim.1", "Isim.2", etc., so every key stays unique:
replacements1 = {builtTelefon:'Isim.3', builtIlce:'Isim.2', builtAdres:'Isim.1', builtIsim:'Isim'}
replacements3 = {builtYesterdayTelefon:'Isim.3', builtYesterdayIlce:'Isim.2', builtYesterdayAdres:'Isim.1', builtYesterdayIsim:'Isim'}
I think you should be able to have all the keys displayed now.
Oh, and PS: to run your code with my locale I had to change line 124 to specify 'utf-8' as the encoding for the outfile as well:
with open('veri3.json', encoding='utf-8') as infile, open('veri2.json', 'w', encoding='utf-8') as outfile:
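If you want guaranteed-valid JSON, a more robust alternative is to parse the file and rename keys on the parsed structure instead of doing raw string replacement line by line. A minimal sketch, assuming the dates occur as object keys; the literal key strings below are placeholders for your built* values:
import json

# Placeholder key names: substitute the values of builtAdres etc. here.
replacements = {
    "01 Temmuz 2018 Pazar": "Isim.1",
    "30 Haziran 2018 Cumartesi": "Isim.2",
}

def rename_keys(node):
    # Walk the parsed JSON and rename matching keys, keeping each new
    # name unique so no object ends up with duplicate keys.
    if isinstance(node, dict):
        return {replacements.get(k, k): rename_keys(v) for k, v in node.items()}
    if isinstance(node, list):
        return [rename_keys(item) for item in node]
    return node

with open('veri3.json', encoding='utf-8') as infile:
    data = json.load(infile)
with open('veri2.json', 'w', encoding='utf-8') as outfile:
    json.dump(rename_keys(data), outfile, ensure_ascii=False, indent=2)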

In Stata, how do I add variable labels from a separate csv file?

I have a set of csv files that are very simple to load into Stata using the -insheet- command. But they have very uninformative variable names. For each of these files, I also have a file of metadata consisting of two columns: the original (uninformative) variable names, and a description of what the variables actually mean. I'd like to use these metadata files to create variable labels, preferably without going through and typing up all the separate label commands or turning the metadata file into a dictionary for each file. It seems like there must be a quick way of loading the metadata file into Stata and looping through it to generate the label commands, but I don't know what it is. Any thoughts?
Ideally each line of the metadata is something like
varname1 "more interesting description"
in which case you can prefix each line with
label var
and then run the file as if it were a do-file, using do. See the help for label. That is easy in a decent text editor: for example, search for the start of each line and replace it with label var followed by a space.
What could bite here includes:
You don't have double quotes " " as delimiters, in which case you need to insert them.
The extra information does not qualify as a variable label because it is more than 80 characters long. See help limits.
There are other ways to do this with Stata. You could write a program to read in the metadata and write out a do-file using file, but if this were my problem I would reach first for my text editor. (Most experienced Stata programmers use something else as well as doedit.)
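For what it's worth, the write-out-a-do-file idea in the last paragraph is also easy to script outside Stata. A minimal sketch in Python, assuming the metadata is a two-column CSV (variable name, then description); both file names here are made up:
import csv

# Read the two-column metadata and emit one -label variable- command
# per row into a do-file.
with open('metadata.csv', newline='', encoding='utf-8') as infile, \
        open('labels.do', 'w', encoding='utf-8') as outfile:
    for varname, description in csv.reader(infile):
        # Swap embedded double quotes for single ones and respect Stata's
        # 80-character limit on variable labels (see help limits).
        label = description.replace('"', "'")[:80]
        outfile.write(f'label variable {varname} "{label}"\n')
After loading the data with insheet, running do labels.do applies every label in one pass.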