I want to plot a vector of length 7 against another vector, also of length 7, with values [697 770 852 941 1209 1336 1477].
I want to display the values [697 770 852 941 1209 1336 1477] along the x-axis at the respective data points.
How can I do this in Octave using the stem function?
set(gca(),'xtick',[697,770,852,941,1209,1336,1477])
set(gca(),'xticklabel',{'697','770','852','941','1209','1336','1477'})
These two lines of code solved it... :|
I'm attempting to run a density functional theory geometry optimization with the following script:
xyzToIonposOpt water.xyz 15
However, this relatively simple command returns the following error:
error: fun(0): subscripts must be either integers 1 to (2^63)-1 or logicals
error: called from
fminsearch>nmsmax at line 275 column 8
fminsearch at line 165 column 25
.tmp.m at line 43 column 8
Here is line 275 of fminsearch.m:
f(1) = dirn * fun (x, varargin{:});
and here is line 165:
[x, exitflag, output] = nmsmax (fun, x0, options, varargin{:});
A similar issue is described here: https://octave.1599824.n4.nabble.com/Issue-with-fminsearch-function-td4693803.html It seems to be an issue with Octave itself. However, I am not certain how to work around it, as I am not sure where the fminsearch function is called.
Thanks for your help!
I have tried some of the online references, as well as Unix time formats, etc., but none of these seem to work. See the examples below.
I am running MySQL 5.5.5 on Ubuntu with the InnoDB engine.
Nothing is custom; this is using the built-in datetime functionality.
Here are some examples with the 6-byte hex string and the decoded message below. We are looking for the decoding algorithm, i.e. how to turn the 6-byte hex string into the correct date/time. The algorithm must work correctly on the examples below. The rightmost byte seems to indicate the difference in seconds correctly for small differences in time between records; e.g. we show an example with a 14-second difference.
Full records in a nicely highlighted and formatted Word document with the examples are here:
https://www.dropbox.com/s/zsqy9o2rw1h0e09/mysql%20datetime%20examples%20.docx?dl=0
contact frank%simrex.com re. reward.
replace % with #
Hex strings and decoded date/time pairs are below, pulled from a healthy file on a running MySQL server.
12 51 72 78 B9 46 ... 2014-10-22 16:53:18
12 51 72 78 B9 54 ... 2014-10-22 16:53:32
12 51 72 78 BA 13 ... 2014-10-22 16:55:23
12 51 72 78 CC 27 ... 2014-10-22 17:01:51
Here you go:
select str_to_date(conv(replace('12 51 72 78 CC 27',' ', ''), 16, 10), '%Y%m%d%H%i%s')
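For reference, the same decoding can be sketched in Python; like the query above, it assumes the 6 bytes are simply the big-endian binary form of the decimal number YYYYMMDDHHMMSS:
>>> s = str(int('12 51 72 78 CC 27'.replace(' ', ''), 16))   # '20141022170151'
>>> '{}-{}-{} {}:{}:{}'.format(s[0:4], s[4:6], s[6:8], s[8:10], s[10:12], s[12:14])
'2014-10-22 17:01:51'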
I have a trivial RGB file saved as TIFF in Photoshop, 1000 or so pixels wide. The first row consists of 3 pixels, all of which are hex 4B red, B0 green, 78 blue; the rest of the row is white.
The strip is LZW-encoded and the initial bytes of the strip are:
80 12 D6 07 80 04 16 0C B4 27 A1 E0 D0 B8 64 36 ... (actually only the first 7 or so bytes are significant to my question.)
In 9-bit segments this is:
100000000 001001011 010110000 001111000 000000000 100000101 100000110 ...
(0x100) (0x4B) (0xB0) (0x78) (0x00) (0x105) (0x106)
From what I understand 256 (0x100) is a reset code, but why is the first extended code after that 261 (0x105) instead of 257? I would expect whatever dictionary entry this points to to be the 4B/B0 pair for the second pixel (which it may well be), but how would the decompression algorithm know to place 4B/B0 at 261 instead of 257? Can someone explain what I'm missing here? Might there be something elsewhere in the .tif file that would indicate this? Thanks very much.
Let's see
256 (100h) is Clear
257 (101h) is EOF
in your case, then
4Bh B0h is 258 (102h)
B0h 78h is 259 (103h)
78h 00h is 260 (104h)
00h 00h is 261 (105h)
Looks good to me. LZW can actually encode one character ahead of what's been added to the table.
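To make the table growth concrete, here is a minimal Python sketch of a plain LZW encoder (Clear = 256, EOI = 257, new strings from 258). It deliberately ignores TIFF details such as variable code widths, and the bytes after 4B B0 78 are assumed to be a run of 00s, as in the table above:
data = bytes([0x4B, 0xB0, 0x78] + [0x00] * 6)    # assumed start of the strip's byte stream
table = {bytes([i]): i for i in range(256)}      # single-byte strings are preloaded
next_code = 258                                  # 256 = Clear, 257 = EOI
codes = [0x100]                                  # the encoder starts by emitting Clear
w = b''
for b in data:
    wb = w + bytes([b])
    if wb in table:
        w = wb                                   # keep extending the current string
    else:
        codes.append(table[w])                   # emit the longest known string
        table[wb] = next_code                    # the new string gets the next free code
        next_code += 1
        w = bytes([b])
codes.append(table[w])                           # flush the final string
print([hex(c) for c in codes])                   # ['0x100', '0x4b', '0xb0', '0x78', '0x0', '0x105', '0x106']
This reproduces the first seven 9-bit codes from your strip.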
I'm trying to develop a forecaster for electric consumption, so I want to perform a regression using daily data for an entire year. My dataset has several features. Googling around, I've found that my problem is a multiple regression problem (please correct me if I am mistaken).
What I want to do is train an SVM for regression with several independent variables and one dependent variable, with n lagged days. Here's a sample of my independent variables; I actually have around 10. (We used PCA to determine which variables had some correlation to our problem.)
Day Indep1 Indep2 Indep3
1 1.53 2.33 3.81
2 1.71 2.36 3.76
3 1.83 2.81 3.64
... ... ... ...
363 1.5 2.65 3.25
364 1.46 2.46 3.27
365 1.61 2.72 3.13
And independent variable 1 is actually my dependent variable in the future. So for example, with p=2 (lagged days), I would expect my SVM to train with the first 2 time steps of all three independent variables.
Indep1 Indep2 Indep3
1.53 2.33 3.81
1.71 2.36 3.76
And the output value of the dependent variable would be "1.83" (independent variable 1 at time 3).
My main problem is that I don't know how to train properly. What I was doing is just putting all the features for the p lagged days into an array for my "x" values, and for my "y" value I'm putting the independent variable at time p+1, since I want to predict the next day's power consumption.
Example of training, with p = 2 and 3 independent variables:
x = [1.53, 2.33, 3.81, 1.71, 2.36, 3.76]    ->    y (next day) = [1.83]
I tried with x being a two-dimensional array, but when you combine it for several days it becomes a 3D array, and libsvm says it can't handle that.
Perhaps I should change from libsvm to another tool or maybe it's just that I'm training incorrectly.
Thanks for your help,
Aldo.
Let me answer using Python / numpy notation.
Assume the original time series data matrix with columns (Indep1, Indep2, Indep3, ...) is a numpy array data with shape (n_samples, n_variables). Let's generate it randomly for this example:
>>> import numpy as np
>>> n_samples, n_variables = 100, 5
>>> data = np.random.randn(n_samples, n_variables)
>>> data.shape
(100, 5)
If you want to use a window size of 2 time-steps, then the training set can be built as follows:
>>> targets = data[2:, 0] # shape is (n_samples - 2,)
>>> targets.shape
(98,)
>>> features = np.hstack([data[0:-2, :], data[1:-1, :]]) # shape is (n_samples - 2, n_variables * 2)
>>> features.shape
(98, 10)
Now you have your 2D input array + 1D targets that you can feed to libsvm or scikit-learn.
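If you go with scikit-learn, the final step is then just a fit/predict call; a minimal sketch (the SVR kernel and C below are arbitrary placeholder settings, not tuned values):
>>> from sklearn.svm import SVR
>>> model = SVR(kernel='rbf', C=1.0).fit(features, targets)
>>> next_window = np.hstack([data[-2:-1, :], data[-1:, :]])  # window from the last 2 rows, shape (1, n_variables * 2)
>>> model.predict(next_window)  # forecast for the day after the last one in data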
Edit: it might very well be the case that extracting more time-series-oriented features, such as moving averages, moving min, moving max, moving differences (time-based derivatives of the signal), or an STFT, would help your SVM model make better predictions.
I've got a set of data that is referenced to NZ TOPO50 locations. I've been trying to work out how to convert them to something useful like WGS84 lat/lon.
I have gone through the documentation at http://www.linz.govt.nz and am still stuck.
An example is "BA32 582206" becomes "36 50 50S 174 46 28E". I have found the NZ-topo-50-map-sheets.xls that has a 5-point multipolygon describing BA32, but I cannot work out how the 582 and the 206 become 174 46 28E and 36 50 50S respectively.
Use the online tool at
http://apps.linz.govt.nz/coordinate-conversion/index.aspx
choose "New Zealand Transverse Mercator Projection" as the input coordinate system and "World Geodetic System 1984" as the output coordinate system.
Your input coordinates should be something like
5413457 North
1528677 East
Then you get your WGS84 coordinates.
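If you want to script the same conversion instead of using the web form, a library such as pyproj can do it; a minimal sketch, assuming NZTM2000 is EPSG:2193 and WGS84 is EPSG:4326:
from pyproj import Transformer
# EPSG:2193 = NZGD2000 / New Zealand Transverse Mercator 2000, EPSG:4326 = WGS84
nztm_to_wgs84 = Transformer.from_crs("EPSG:2193", "EPSG:4326", always_xy=True)
lon, lat = nztm_to_wgs84.transform(1528677, 5413457)   # (easting, northing)
print(lat, lon)                                         # latitude / longitude in decimal degrees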