I've been trying to import and export MS Project workday exceptions from/to Excel. Daily, monthly or yearly exceptions are no problem, but weekly exceptions are causing me some trouble:
From recording macros, I understand that each combination of weekdays selected (via the tick-boxes) maps the property "pjWeekday" (DaysOfWeek in the recorded code) to a specific integer (6, 10 and 18 in the examples below).
Example of recorded code:
ActiveProject.BaseCalendars("Copy of Standard").Exceptions.Add Type:=6, Start:="01.01.2022", Finish:="18.01.2022", Name:="TestWeekly1", Period:=1, DaysOfWeek:=6
ActiveProject.BaseCalendars("Copy of Standard").Exceptions.Add Type:=6, Start:="01.02.2022", Finish:="02.02.2022", Name:="TestWeekly2", Period:=1, DaysOfWeek:=10
ActiveProject.BaseCalendars("Copy of Standard").Exceptions.Add Type:=6, Start:="01.03.2022", Finish:="03.03.2022", Name:="TestWeekly3", Period:=1, DaysOfWeek:=18
Is there any way to get a list of all the integers for all combinations (or is there an algorithm behind it) without recording every possibility?
Grateful for any advice,
Chris
Enumerated constants are documented on the Microsoft site; here's the list for pjWeekDay.
The Exceptions.Add method documentation states that the DaysOfWeek parameter is "The days on which the exception occurs. Can be a combination of PjWeekday constants." My testing indicates this is only partially true.
It appears that the DaysOfWeek values are not PjWeekday constants but rather powers of 2 that can be aggregated. For example, if the weekly exception is for Mondays, the DaysOfWeek = 2 (2^1) and for Thursdays it would be 16 (2^4). And if the exception is for Mondays and Wednesdays, the value would be 10 (2^1 + 2^3).
The formula is: the sum of 2 ^ (PjWeekday constant - 1) for each weekday in the exception.
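Given that formula, the whole table can be generated instead of recorded. A minimal sketch in Python (the weekday-name helper is illustrative; the constants are the documented PjWeekday values, pjSunday = 1 through pjSaturday = 7, and the decompositions of the recorded values 6, 10 and 18 follow from the formula):

```python
# PjWeekday constants: pjSunday = 1 ... pjSaturday = 7
PJ_WEEKDAY = {"Sunday": 1, "Monday": 2, "Tuesday": 3, "Wednesday": 4,
              "Thursday": 5, "Friday": 6, "Saturday": 7}

def days_of_week(*days):
    # Sum 2^(PjWeekday constant - 1) for each selected weekday.
    return sum(2 ** (PJ_WEEKDAY[d] - 1) for d in days)

print(days_of_week("Monday", "Tuesday"))    # 6  (TestWeekly1)
print(days_of_week("Monday", "Wednesday"))  # 10 (TestWeekly2)
print(days_of_week("Monday", "Thursday"))   # 18 (TestWeekly3)
```

All 127 possible combinations fall out of the same sum, so there is no need to record each one.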
Related
I have two fields, both have the size set to double in the table properties. When I subtract one field from the other some of the results are displayed as scientific notation when I click in the cell and others just show regular standard format to decimal places.
The data in both fields was updated with Round([Field1],2) and Round([Field2],2), so the numbers in the fields should not have more than 2 decimal places.
Here's an example:
Field1 = 7.01
Field2 = 7.00
But when I subtract Field1 from Field2, the Access display shows 0.01, yet when I click on the result it displays -9.99999999999979E-03. So of course, when I try to filter on all results that equal 0.01, the query comes back empty, because the underlying value is -9.99999999999979E-03.
Even stranger: if Field1 = 1.02 and Field2 = 1.00, the result is 0.02, the display still shows 0.02 when I click on it, and I can filter on all results that equal 0.02.
Why would MS Access treat numbers in the same query differently? Why is it displaying scientific notation and failing to filter?
Thanks for any support.
Take this simple code in Access (or even Excel) and run it!
Public Sub TestAdd()
    Dim MyNumber As Single
    Dim I As Integer
    For I = 1 To 10
        MyNumber = MyNumber + 1.01
        Debug.Print MyNumber
    Next I
End Sub
Here is the output of the above:
1.01
2.02
3.03
4.04
5.05
6.06
7.070001
8.080001
9.090001
10.1
Note how after JUST 7 simple little additions, Access is already spitting out wrong numbers and has rounding errors!
More amazing? The above code runs the SAME in Excel!
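In fact the drift is reproducible in any language once intermediates are rounded to 32-bit floats. A sketch in Python, using the standard library's struct module to mimic VBA's Single assignment (the round(..., 6) only shortens the printed values):

```python
import struct

def to_single(x):
    # Round-trip through a 32-bit float to mimic storing into a VBA Single.
    return struct.unpack("f", struct.pack("f", x))[0]

total = 0.0
for i in range(10):
    total = to_single(total + 1.01)  # assignment back to Single rounds
    print(round(total, 6))           # the 7th value is 7.070001, not 7.07
```

The same single-precision rounding that VBA applies on each assignment produces the same wrong digits.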
Ok, I am sure I have your attention now!
If I recall, the FIRST day and first class in computing science teaches this: computers don't store exact numbers when using floating point.
So how is it that the WHOLE business community using Excel, or Access, or in fact your desktop calculator, doesn't come crashing down?
You mean Access cannot add up 7 simple little numbers without having errors?
How can I even do payroll then?
The basic concept and ALL you need to know here is that computers store real (floating) numbers only as approximate.
And integer values are stored exactly.
So, there are several approaches here. If you are writing ANY business software that needs to work with money values and not suffer rounding errors, then you are better off choosing some kind of "scaled" integer. Behind the scenes, the computer does NOT use floating point numbers, but an integer value along with a "decimal" position.
In fact, in a lot of older business BASIC languages we often had to do the scaling on our own (so we would choose a large integer format). This "scaling" feature still exists in Access (and you see it in the format options).
So, two choices here. If you don't want "tiny" rounding errors, then use the "currency" data type. This may or may not be sufficient for you, since it only allows a maximum of 4 decimal places, but in most cases it should suffice. If you need more decimal places, you can multiply the values by 1000, and then divide by 1000 when the calculations are done.
So, try changing the column type to currency, and that should work. (This is also how your desktop calculator works, which is why you don't see funny rounding errors there, in most cases.)
But the FIRST rule of the day, from that first computer course: computers do not store exact numbers for floating point - they are approximations, subject to rounding errors. Now, if you really are using Double for the table, then I don't think these rounding errors should show up, since you have so many decimal places available.
But, I would try using currency data type - it is a scaled integer, or so called packed decimal.
You can ALSO choose to use a packed decimal in Access; it supports up to 28 digits, and you can set the "scale" (the decimal point location). However, since you can't declare a Decimal type directly in VBA, I would suggest using currency data types in the table and in VBA code.
If you need more than 4 decimal places, then consider scaling the currency values in your code, or at that point consider a packed decimal type in the table; values in VBA will then have to use the Variant type, and they will correctly take on the data column's setting when assigned a value from the table(s) in question.
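The scaled-integer/packed-decimal idea is easy to see outside Access too. For instance, Python's decimal module (a sketch, not Access code) does exact base-10 arithmetic and reproduces the sum above without any drift:

```python
from decimal import Decimal

# Exact base-10 arithmetic: no binary rounding of 1.01 ever happens.
total = Decimal("0.00")
for _ in range(10):
    total += Decimal("1.01")
print(total)  # 10.10
```

This is the same trade the currency type makes: exact decimal digits instead of fast binary floats.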
Needless to say, from the first day you start dealing with computers as anything beyond an "end user", this is your first lesson of the day!
"The data in both fields was updated with Round([Field01],2) and Round([Filed2],2) so the numbers in the fields should not be any longer than 2 decimal places." instead of rounding up(which i think is the reason for the scientific notation) you can use number field as data type , then under field size choose double, then under decimal places choose 2.
I have a getdate() value and I want to convert it into this format: 20210211T172650Z. How do I do that in an SSIS expression?
In SSIS, we have data types for strings, numbers and dates. Dates have no format and when it is converted to a string value, you're getting whatever format the localization rules dictate.
If you have a particular format you want, then you need to control that and the only way you can control it, is by using a string data type.
The pattern we're going to use here, for each element,
extract the digit(s)
convert the digits to string
left pad/prepend a leading zero
extract the last 2 characters from our string
When we extract digits, they're numbers, and numbers don't have leading zeroes. We convert to string, which allows us to prepend the character zero, because we're just concatenating strings. If the number was less than 10, this prepending results in exactly what we want: 9 -> 09. If it was greater than 9, we have an extraneous character: 11 -> 011. We don't care that we went too big, because we then take the rightmost 2 characters, making 09 -> 09 and 011 -> 11. This is the shortest logic for building a leading-zero string in SSIS.
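The prepend-then-trim trick is language-agnostic; a quick illustration in Python (the helper name is mine, not from SSIS):

```python
def pad2(n):
    # "0" + digits, then keep only the rightmost 2 characters.
    return ("0" + str(n))[-2:]

print(pad2(9))   # 09
print(pad2(11))  # 11
```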
Using that logic, we're going to create a variable for each element of our formatted string: year, month, day, hour, minute, second.
What's the starting date?
I created a variable called StartDate of type DateTime and hard coded it to a starting point. This is going to allow me to test various conditions. If I used getdate, then I'd either have to adjust my computer's clock to ensure my code works on 2001-01-01 at 01:01:01 as well as 2021-12-31 at 23:59:59. When you're satisfied your code passes all the tests, you can then specify that StartDate property EvaluateAsExpression is True and then use GetDate(). But I wouldn't use GetDate().
GetDate is going to evaluate every time you inspect it. When your package starts, it will show 2021-02-12 at 11:16 AM, but if your package takes 5 minutes to run, then when you go to re-use the value built on GetDate, you will now get 2021-02-12 at 11:21 AM.
In your case, those keys won't match if you send them more than once to your Amazon endpoint. Instead, use a system-scoped variable like @[System::StartTime]. That is set to the time the package starts executing and remains constant for the duration of the SSIS package execution. So when you're satisfied the expression you've built matches the business rules, change @[User::StartDate] over to @[System::StartTime]. It provides the updated time, but without the challenges of drifting time.
Extract the digit(s)
The SSIS expression language has YEAR, MONTH and DAY defined but no shorthand methods for time components. But, it does have the DATEPART function in which you can ask for any named date part. I'm going to use that for all of my access methods as it makes it nice and consistent.
As an example, this is how I get the Hour. String literal HOUR and we use our variable
DATEPART("HOUR", @[User::StartDate])
Convert the digits to string
The previous step gave us a number but we've got that leading zero problem to solve so convert that to a string
(DT_WSTR, 2)DATEPART("HOUR", @[User::StartDate])
Cast to string, two characters wide max, the number we generated
left pad/prepend a leading zero
String concatenation is the + operator and since we can't concatenate a string to a number, we make sure we have the correct operand types on both sides
"0" + (DT_WSTR, 2)DATEPART("HOUR",#[User::StartDate])
extract the last 2 characters from our string
Since we might have a 2 or 3 character string at this point, we're going to use the RIGHT function to only get the last N characters.
RIGHT("0" + (DT_WSTR, 2)DATEPART("HOUR",#[User::StartDate]), 2)
Final concatenation
Now that we have our happy little variables and we've checked our boundary conditions, the only thing left is to make one last variable, DateAsISO8601, type String, with EvaluateAsExpression = True:
@[User::Year] + @[User::Month] + @[User::Day] + "T" + @[User::Hour] + @[User::Minute] + @[User::Second] + "Z"
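For comparison, here is the whole composition in one place as a Python sketch (mirroring the SSIS expressions above, not SSIS itself; the function name is mine):

```python
from datetime import datetime

def pad2(n):
    # Mirrors RIGHT("0" + (DT_WSTR, 2)..., 2) in the SSIS expression.
    return ("0" + str(n))[-2:]

def iso8601_basic(dt):
    # yyyymmddThhmmssZ, e.g. 20210211T172650Z
    return (str(dt.year) + pad2(dt.month) + pad2(dt.day) + "T"
            + pad2(dt.hour) + pad2(dt.minute) + pad2(dt.second) + "Z")

print(iso8601_basic(datetime(2021, 2, 11, 17, 26, 50)))  # 20210211T172650Z
```

Running boundary cases like 2001-01-01 01:01:01 and 2021-12-31 23:59:59 through a helper like this is exactly the testing the StartDate variable enables.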
I am using SPSS to fit a mixed effects model for the following project:
Participants are asked open-ended questions and their answers are recorded.
For example, if a participant's answer relates to equality, the variable "equality" is coded "1"; otherwise it is coded "0". The dependent variable is therefore "equality".
Fixed effects:
- participant's country (Asians vs. Westerners)
- gender (Male vs Female)
- age group (younger age group vs. older age group)
- condition (control group vs. intervention group)
Random effect: Subject ID (participants)
Sample size: over 600 participants
My syntax in SPSS:
MIXED Equality BY Country Gender AgeGroup Condition
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED= Country Gender AgeGroup Condition | SSTYPE(3)
/METHOD=ML
/PRINT=SOLUTION TESTCOV
/RANDOM=INTERCEPT | SUBJECT(SubID_R) COVTYPE(VC).
When I run this analysis in SPSS, the following warning appears:
Iteration was terminated but convergence has not been achieved.
The MIXED procedure continues despite this warning. Subsequent results produced are based on the last iteration. Validity of the model fit is uncertain.
I tried increasing "MXSTEP" from 10 to 10000 in the syntax, but another warning appears:
The final Hessian matrix is not positive definite although all
convergence criteria are satisfied.
The MIXED procedure continues despite this warning. Validity of subsequent results cannot be ascertained.
I also tried increasing "MXITER", but the warning remains. How can I deal with this problem and get rid of the warning?
Aside from what you've already tried, in some cases increasing the number of Fisher scoring steps can be helpful, but it may be the case that your random intercept variance is truly redundant and you won't be able to resolve this problem with those data and that model.
Also, typically you would not use a linear model for a binary response variable, but would use something like a logistic model (this can be done in GENLINMIXED, under Analyze>Mixed Models>Generalized Linear in the menus).
I am pondering how best to develop a JUnit test for a function that calculates a number of points and values in time based on a number of inputs. The purpose of the method is to calculate a series of points in time given a series of gradient value pairs, i.e.
Gradient 1 to Value 1, Gradient 2 to Value 2, Gradient 3 to Value 3, and so on...
Given a starting point in time and starting value, the function calculates the points in time each Value is reached (in the gradient value pairs) up until a target value is reached. This is essentially to plot a line on a graph, with the x-axis having date values and the y-axis having numeric values.
The method to test takes the following inputs:
StartTime (Date)
StartValue (Double)
TargetValue (Double)
GradientValuePairs (ArrayList of GradientValuePair)
EnsurePointEvery5Minutes (Boolean)
Where GradientValuePair is like:
class GradientValuePair {
Double gradient; // Gradient up to Target
Double target;
...
}
The output from this method is essentially an ArrayList of DatePoint - a profile - with:
class DatePoint {
Date date;
Double value;
...
}
The EnsurePointEvery5Minutes parameter basically adds a date point every 5 minutes to the calculated profile, which is then returned by the method.
To ensure the test has worked I will need to check each date and value is to what is expected by either:
Iterating through the array with an array of what is expected.
Store minute/second offsets from the StartTime with the expected value in some sort of structure.
Now the difficult part for me is deciding on how to write the TestCase. I want to test a broad/diverse range of inputs so that:
StartTime will cover 30 minutes i.e. in range of 2012-03-08 00:00 to 2012-03-08 00:30.
StartValue will be in the range of 0 to 1000.
TargetValue will be in the range of StartValue to 1000.
GradientValuePairs will require around 10 different arrays to be tested.
EnsurePointEvery5Minutes will be tested with both true and false.
Now given the number of different input sets will be something like:
30 * (0 to 1000 StartValue/TargetValue pairs = 500,500) * 10 * 2 = 300,300,000 different test input sets
Or call us crazy for wanting to do this. Maybe the tests are too diverse for this instance.
I am wondering if anybody has any advice for testing such scenarios like this. I can't think of any other way to do this than implement my own algorithm for calculating the output before each call to the method I am testing - then who is to say that the algorithm I implement to test it is correct.
If I understand correctly, you are proposing that you test every possible combination of numeric inputs. That is almost never required of unit tests, as it would be essentially equivalent to testing whether the Java math library works for all numbers and all operations. Generally you try to identify edge conditions and write tests for those. These include things like 0s, negatives, numeric overflow, and combinations of inputs whose intermediate computations result in the same things. Then, of course, you want to test a handful of normal vanilla cases as well that are not edge cases.
So short answer: no you should not need to test 300M+ input sets.
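One concrete way to shrink the space is boundary-value selection: take the edges (plus a midpoint) of each range and cross only those. A sketch in Python (the ranges come from the question; the specific picks are illustrative):

```python
import itertools

start_values = [0, 1, 500, 999, 1000]  # edges plus a midpoint of 0..1000
target_offsets = [0, 1, 100]           # TargetValue must be >= StartValue
ensure_flags = [True, False]           # EnsurePointEvery5Minutes

# Cross the picks, discarding cases where TargetValue leaves the range.
cases = [(s, s + t, f)
         for s, t, f in itertools.product(start_values, target_offsets,
                                          ensure_flags)
         if s + t <= 1000]

print(len(cases))  # 24 cases instead of hundreds of millions
```

Each of the ~10 GradientValuePairs arrays would then run against these few dozen cases, which is entirely tractable.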
The number 71867806 represents the present day, with the smallest unit being days.
Sorry guys, caching owned me - it's actually milliseconds!
How can I
calculate the current date from it?
(or) convert it into a Unix timestamp?
The solution shouldn't use language-dependent features.
Thanks!
This depends on:
What unit this number represents (days, seconds, milliseconds, ticks?)
When the starting date was
In general I would discourage you from trying to reinvent the wheel here, since you will have to handle every single exception in regards to dates yourself.
If it's truly an integer number of days, and the number you've given is for today (April 21, 2010, for me as I'm reading this), then the "zero day" (the epoch) was obviously enough 71867806 days ago. I can't quite imagine why somebody would pick that though -- it works out to roughly 196,763 years ago (~194,753 BC, if you prefer). That seems like a strange enough time to pick that I'm going to guess that there's more to this than what you've told us (perhaps more than you know about).
It seems to me the first thing to do is verify that the number does increase by one every 24 hours. If at all possible keep track of the exact time when it does increment.
First, you have only one data point, and that's not quite enough. Get the number for "tomorrow" and see if it's 71867806+1. If it is, then you can safely bet that +1 means +1 day. If it's something like tomorrow-today = 24, then odds are +1 means +1 hour, and the logic to display days only shows you the "day" part. If it's something else, check whether it's near 24*60 (which would be minutes), 24*60*60 (seconds), or 24*60*60*1000 (milliseconds).
Once you have an idea of what kind of units you are using, you can estimate how many years ago the "start" date of 0 was. See if that aligns with any of the common calendar systems listed at http://en.wikipedia.org/wiki/List_of_calendars. Odds are that the calendar you are using isn't a truly new creation, but a reimplementation of an existing calendar. If it seems very far back, it might be a Julian Date, whose day 0 is equivalent to 4713 BCE January 01 12:00:00.0 UT, a Monday. Julian Dates and Modified Julian Dates are often used in astronomy calculations.
The next major goal is to find Jan 1, 1970 00:00:00. If you can find the number that represents that date, then you simply subtract it from this foreign calendar system and convert the remainder from the discovered units to milliseconds. That will give you UNIX time which you can then use with the standard UNIX utilities to convert to a time in any time zone you like.
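Once the epoch value and the unit are pinned down, the conversion itself is one subtraction and one scale. A sketch (the function name and all concrete numbers are hypothetical placeholders, not values from the question):

```python
def to_unix_millis(foreign_now, foreign_epoch_1970, unit_in_millis):
    # Subtract the foreign value for 1970-01-01 00:00:00, then scale
    # the remainder from the discovered unit into milliseconds.
    return (foreign_now - foreign_epoch_1970) * unit_in_millis

# e.g. if the counter were in seconds and 1970-01-01 mapped to 0:
print(to_unix_millis(71867806, 0, 1000))
```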
In the end, you might not be able to be 100% certain that your conversion exactly matches the hand-implemented system, but you can test your assumptions about the calendar by plugging in numbers and seeing if they display as you predicted. Use this technique to create a battery of tests which will help you determine how the system handles leap years, etc. Remember, it might not handle them at all!
What time is 71,867,806 milliseconds from midnight?
There are:
- 86,400,000 ms/day
- 3,600,000 ms/hour
- 60,000 ms/minute
- 1,000 ms/second
Remove and tally these units until you have the time, as follows:
How many days? None because 71,867,806 is less than 86,400,000
How many hours? Maximum times 3,600,000 can be removed is 19 times
71,867,806 - (3,600,000 * 19) = 3,467,806 ms left.
How many minutes? Maximum times 60,000 can be removed is 57 times.
3,467,806 - (60,000 * 57) = 47,806 ms left
How many seconds? Maximum times 1,000 can be removed is 47 times.
47,806 - (1,000 * 47) = 806
So the time is: 19:57:47.806
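The same remove-and-tally arithmetic, as a Python sketch:

```python
ms = 71_867_806  # less than 86,400,000, so no whole days to remove

hours, rem = divmod(ms, 3_600_000)    # 19 hours removed
minutes, rem = divmod(rem, 60_000)    # 57 minutes removed
seconds, millis = divmod(rem, 1_000)  # 47 seconds, 806 ms left

print(f"{hours:02d}:{minutes:02d}:{seconds:02d}.{millis:03d}")  # 19:57:47.806
```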
It is indeed a fairly long time ago if the smallest unit is days. However, assuming you're sure about it, I suggest the following shell command, which is obviously not valid for dates before 1 Jan 1970:
date -d "#$(echo '(71867806-71853086)*3600*24'|bc)" +%D
or without bc:
date -d "#$(((71867806 - 71853086) * 3600 * 24))" +%D
Sorry again for the messy question; I got the solution now. In JS it looks like this:
var dayZero = new Date(new Date().getTime() - 71867806 * 1000);