I am using a custom item renderer to render the items of my DataGroup.
The problem is that whenever the value of the data provider (an ArrayList in my case) changes,
it takes 6-9 seconds to render, which is unacceptable.
When I tried to track down the problem, I found that whenever there is a small amount of data, the DataGroup refreshes immediately and all the items are rendered within a second.
But whenever there is a relatively large amount of data, the DataGroup takes 5-7 seconds to refresh.
This is my data group :
<s:DataGroup id="selectedInstancesView" dataProvider="{relatedInstances}" width="100%" height="100%"
itemRenderer="com.ui.renderers.CustomInstanceRenderer"
clipAndEnableScrolling="true" >
This is the function that reassigns the value of the data provider of the above DataGroup:
private function relatedInstanceRetrieved(event:ResultEvent):void{
trace("related instances retrieved at :: " + new Date());
relatedInstances = new ArrayList(event.result as Array);
trace("related instances populated at :: " + new Date());
}
And this is from within my custom item renderer class, where I print the time in the item renderer's creationComplete handler:
protected function itemrenderer1_creationCompleteHandler(event:FlexEvent):void
{
    trace("getting rendered at :: " + new Date());
}
And this is the result I am getting:
related instances retrieved at :: Wed Apr 20 **20:58:10** GMT+0530 2016
related instances populated at :: Wed Apr 20 **20:58:10** GMT+0530 2016
getting rendered at :: Wed Apr 20 **20:58:15** GMT+0530 2016
getting rendered at :: Wed Apr 20 20:58:15 GMT+0530 2016
getting rendered at :: Wed Apr 20 20:58:15 GMT+0530 2016
...(1000 similar traces with exact same time)
All the item-renderers (around 1000) are created within a second.
But as you can see, the gap between the time at which the data provider (relatedInstances) is re-populated/re-assigned and the time at which the item renderers start getting created is 5 seconds, which means it is taking 5 seconds just to refresh the data, which is not acceptable.
I tried calling invalidateDisplayList(), reassigning the item renderer, and calling invalidateProperties(), but none of these worked.
I am using Flex SDK 4.6.
Please enlighten me.
I need help with the specific code I will paste below. I am using the Ames Housing data set collected by Dean De Cock.
I am using a Python notebook and editing through Anaconda's Jupyter Lab 2.1.5.
The code below is supposed to replace all np.nan or "None" values. For some reason,
after repeatedly calling a hand-made function inside a for loop, the columns of the resulting data frame get swapped around.
Note: I am aware I could do this with an "imputer." I plan to select numeric and object type features, impute them separately, then put them back together. As a side note, is there any way I can do that while still having the details I currently print out displayed or otherwise verifiable?
In the cell in question, the flow is:
1. Get and assign the number of data points in the data frame df_train.
2. Get and assign a series that lists the count of null values in df_train. The syntax is sr_null_counts = df_train.isnull().sum().
3. Create an empty list to which the names of features with more than 5% of their values null are appended. They will be dropped later, outside the for loop. I thought at first that this was the problem, since the command to drop the columns of df_train in place used to be inside the for loop.
4. Repeatedly call a hand-made function to impute the columns whose null values do not exceed 5% of the row count of df_train.
Inside the for loop I call a hand-made function with nested try-except statements that does the following:
1. Accept a series and, optionally, the series' name from when it was a column in a dataframe. It assigns a copy of the passed series to a local variable.
2. In that exact order: (a) try to replace all null (NaN or None) values with the mean of the passed series; (b) if that fails, try to replace all null values with the median of the series; (c) if even that fails, replace all null values with the mode of the series.
3. Return the edited copy of the series with all null values replaced. It should also print out strings that tell me what feature was modified and what summary statistic was used to replace/impute the missing values.
The final line is to drop all the columns marked as having more than 5% missing values.
Here is the full code:
Splitting the main dataframe into a train and test set.
The full data set was loaded through df_housing = pd.read_csv(sep = '\t', filepath_or_buffer = "AmesHousing.tsv").
def make_traintest(df, train_fraction = 0.7, random_state_val = 88):
    df = df.copy()
    df_train = df.sample(frac = train_fraction, random_state = random_state_val)
    bmask_istrain = df.index.isin(df_train.index.values)
    df_test = df.loc[ ~bmask_istrain ]
    return {
        "train": df_train,
        "test": df_test
    }
dict_traintest = make_traintest(df = df_housing)
df_train = dict_traintest["train"]
df_test = dict_traintest["test"]
Get a List of Columns With Null Values
lst_have_nulls = []
for feature in df_housing.columns.values.tolist():
    nullcount = df_housing[feature].isnull().sum()
    if nullcount > 0:
        lst_have_nulls.append(feature)
        print(feature, "\n=====\nNull Count:\t", nullcount, '\n', df_housing[feature].value_counts(dropna = False), '\n*****')
Definition of the hand-made function:
def impute_series(sr_values, feature_name = ''):
    sr_out = sr_values.copy()
    try:
        # fillna returns a new series unless inplace=True, so assign the result back
        sr_out = sr_out.fillna(value = sr_values.mean())
        print("Feature", feature_name, "imputed with mean:", sr_values.mean())
    except Exception as e:
        print("Filling NaN values with mean of feature", feature_name, "caused an error:\n", e)
        try:
            sr_out = sr_out.fillna(value = sr_values.median())
            print("Feature", feature_name, "imputed with median:", sr_values.median())
        except Exception as e:
            print("Filling NaN values with median for feature", feature_name, "caused an error:\n", e)
            # mode() returns a series (there can be ties), so use its first value
            sr_out = sr_out.fillna(value = sr_values.mode()[0])
            print("Feature", feature_name, "imputed with mode:", sr_values.mode()[0])
    return sr_out
For-Loop
Getting the count of null values, defining the empty list of columns to drop (so it can be appended to), and then doing the following: for every column in lst_have_nulls, check whether the column has at most or more than 5% missing values. If more, append the column to lst_drop; otherwise, call the hand-made imputing function. After the for loop, drop all the columns in lst_drop, in place. A sketch of this loop is shown below.
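The cell itself is not reproduced here, so the following is only a minimal sketch of what the description above amounts to; the row-count variable name (num_rows) is my own, the rest comes from the question:
num_rows = df_train.shape[0]                    # number of data points (assumed name)
sr_null_counts = df_train.isnull().sum()        # count of null values per column
lst_drop = []                                   # columns to drop after the loop

for feature in lst_have_nulls:
    if sr_null_counts[feature] > 0.05 * num_rows:
        lst_drop.append(feature)                # more than 5% missing: mark for dropping
    else:
        df_train[feature] = impute_series(df_train[feature], feature_name=feature)

# Final line: drop all columns marked as having more than 5% missing values
df_train.drop(columns=lst_drop, inplace=True)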
Where did I go wrong? In case you need the entire notebook, I have uploaded it to Kaggle. Here is a link.
https://www.kaggle.com/joachimrives/ames-housing-public-problem
Update: Problem Still Exists After Testing Anvar's Answer with Changes
When I tried the code of Anvar Kurmukov, my dataframe column values still got swapped. The change I made was adding int and float to the list of dtypes to check. The changes are inside the for-loop:
if dtype in [np.int64, np.float64, int, float].
It may be a problem with another part of my code in the full notebook. I will need to check where it is by calling df_train.info() cell by cell from the top. I tested the code in the notebook I made public. It is in cell 128. For some reason, after running Anvar's code, the df_train.info() method returned this:
1st Flr SF 2nd Flr SF 3Ssn Porch Alley Bedroom AbvGr Bldg Type Bsmt Cond Bsmt Exposure Bsmt Full Bath Bsmt Half Bath ... Roof Style SalePrice Screen Porch Street TotRms AbvGrd Total Bsmt SF Utilities Wood Deck SF Year Built Year Remod/Add
1222 1223 534453140 70 RL 50.0 4882 Pave NaN IR1 Bnk ... 0 0 0 0 0 NaN NaN NaN 0 87000
1642 1643 527256040 20 RL 81.0 13870 Pave NaN IR1 HLS ... 52 0 0 174 0 NaN NaN NaN 0 455000
1408 1409 905427050 50 RL 66.0 21780 Pave NaN Reg Lvl ... 36 0 0 144 0 NaN NaN NaN 0 185000
1729 1730 528218050 60 RL 65.0 10237 Pave NaN Reg Lvl ... 72 0 0 0 0 NaN NaN NaN 0 178900
1069 1070 528180110 120 RL 58.0 10110 Pave NaN IR1 Lvl ... 48 0 0 0 0 NaN NaN NaN 0 336860
tl;dr instead of try: except you should simply use if and check dtype of the column; you do not need to iterate over columns.
drop_columns = df.columns[df.isna().sum() / df.shape[0] > 0.05]
df = df.drop(drop_columns, axis=1)

num_columns = []
cat_columns = []
for col, dtype in df.dtypes.iteritems():
    if dtype in [np.int64, np.float64]:
        num_columns.append(col)
    else:
        cat_columns.append(col)

df[num_columns] = df[num_columns].fillna(df[num_columns].mean())
# mode() returns a frame (there can be ties), so take its first row
df[cat_columns] = df[cat_columns].fillna(df[cat_columns].mode().iloc[0])
Short comment on make_traintest function: I would simply return 2 separate DataFrames instead of a dictionary or use sklearn.model_selection.train_test_split.
Update: you can check the number of NaN values in a column, but it is unnecessary if your only goal is to impute the NaNs.
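For completeness, here is roughly how that idea looks applied to the question's df_train, using select_dtypes instead of the explicit dtype loop (a sketch only, not tested on the Ames data):
import numpy as np

# Drop columns with more than 5% missing values
drop_columns = df_train.columns[df_train.isna().sum() / df_train.shape[0] > 0.05]
df_train = df_train.drop(drop_columns, axis=1)

# Split the remaining columns by dtype
num_columns = df_train.select_dtypes(include=[np.number]).columns
cat_columns = df_train.columns.difference(num_columns)

# Impute numeric columns with the mean, everything else with the mode
df_train[num_columns] = df_train[num_columns].fillna(df_train[num_columns].mean())
df_train[cat_columns] = df_train[cat_columns].fillna(df_train[cat_columns].mode().iloc[0])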
Answer
I discovered the answer as to why my columns were being swapped. They were not actually being swapped. The original problem was that I had not set the "Order" column as the index column. To fix the problem in the notebook on my PC, I simply added the following parameter and value to pd.read_csv: index_col = "Order". That fixed the problem in my local notebook. When I tried it on the Kaggle notebook, however, it did not fix the problem.
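For reference, a minimal sketch of that corrected read call (based on the read_csv call quoted earlier in this question):
import pandas as pd

# Read the tab-separated Ames data with the "Order" column as the index
df_housing = pd.read_csv("AmesHousing.tsv", sep="\t", index_col="Order")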
The version of the Ames Housing data set I first used in the notebook - for some reason - was also a cause of the column swapping.
Anvar's code is fine. You may test the code I wrote, but to be safe, defer to Anvar's code. Mine is still to be tested.
Testing Done
I modified the Kaggle notebook I linked in my question. I used the data set I was actually working with on my PC. When I did that, the code given in Anvar Kurmukov's answer worked perfectly. I tested my own code and it seems fine, but test both versions before relying on either. I only reviewed the data sets using head() and manually checked the column inputs. If you want to check the notebook, here it is:
https://www.kaggle.com/joachimrives/ames-housing-public-problem/
To test whether the data set was at fault, I created two data frames. One was taken directly from my local file uploaded to Kaggle. The other used the current version of the Ames Iowa Housing data set I had used as input. The columns were properly "aligned" with their expected input. To find the expected column values, I used this source:
http://jse.amstat.org/v19n3/decock/DataDocumentation.txt
Here are the screenshots of the different results I got when I swapped data sets:
With an uploaded copy of my local file:
With the original AmesHousing.csv From Notebook Version 1:
The data set I used that caused the column swap on the Kaggle notebook:
https://www.kaggle.com/marcopale/housing
I have a Google Spreadsheet that I have been using to keep track of my hours worked at my job. I am trying to create a custom function to calculate my total hours for the week. Say I work 6 hours of overtime, but then take Friday off. My regular hours would be 32, and I would have 6 hours of overtime. In the event that I don't work 40 hours, I would like to adjust my total hours by taking from any overtime hours and adding to my regular hours.
I have come up with the following function, but I have not been able to make it work. I believe I am running into a problem with data types (the inputs are Durations), but I'm not sure how to resolve it. I am dividing by 24 because that seems to convert the values from Duration to Number, but I still can't get it to return the correct answer.
function calcAdjRegHours(regHours, otHours) {
  if (regHours < (40/24)) {
    if (otHours > 0) {
      if ((regHours + otHours) > (40/24)) {
        var diff = (40/24) - regHours;
        regHours += diff;
        return regHours;
      } else {
        return "regHours + otHours is less than 40";
      }
    } else {
      return "there are no otHours";
    }
  } else {
    return "regHours is greater than 40";
  }
}
What am I overlooking, or am I making this overly complicated?
Edit: When I call this function with inputs of 40:00, and 2:00, I get the value:
Sun Dec 31 1899 17:00:00 GMT-0700 (MST)2208988800001.6665.
If I run this function:
function calcAdjRegHours(regHours, otHours) {
return ((regHours*24 + otHours*24)/24);
}
I get: -4418114400000.
If I use "return (regHours + otHours);", I get:
Sun Dec 31 1899 17:00:00 GMT-0700 (MST)Sat Dec 30 1899 03:00:00 GMT-0700 (MST).
Something is going wrong when I try to add the variables. They are formatted as Duration, and from my research I can/need to convert them to do arithmetic. I did that by multiplying the variables by 24, adding, and then dividing by 24 again to get it back to a duration.
I ended up using: =if(F14<(40/24), (F14+G14), if(F14=(40/24), F14, F14-(40/24))). That did the trick, although I would still like to come up with a custom function that would do it a little more nicely.
According to the Bluetooth Advertisement sample, I need to set the CompanyID (UInt16) and the Data (IBuffer, a UInt16 in the sample) to start watching for advertisers.
On the iPhone, I can set the beacon UUID to 4B503F1B-C09C-4AEE-972F-750E9D346784. And reading on the internet, I found that Apple's company ID is 0x004C, so I tried 0x004C and 0x4C00.
So, this is the code I have so far, but of course, it is not working.
var manufacturerData = new BluetoothLEManufacturerData();
// Then, set the company ID for the manufacturer data. Here we picked an unused value: 0xFFFE
manufacturerData.CompanyId = 0x4C00;
// Finally set the data payload within the manufacturer-specific section
// Here, use a 16-bit UUID: 0x1234 -> {0x34, 0x12} (little-endian)
var writer = new DataWriter();
writer.WriteGuid(Guid.Parse("4B503F1B-C09C-4AEE-972F-750E9D346784"));
// Make sure that the buffer length can fit within an advertisement payload. Otherwise you will get an exception.
manufacturerData.Data = writer.DetachBuffer();
I also tried inverting the bytes in the UUID:
writer.WriteGuid(Guid.Parse("504B1B3F-9CC0-EE4A-2F97-0E75349D8467"));
No success so far. Am I mixing up two completely different technologies?
The most important thing you need to do to detect Beacons on Windows 10 is to use the new BluetoothLeAdvertisementWatcher class.
The code in the question seems focussed on setting up a filter to look for only specific Bluetooth LE advertisements matching a company code and perhaps a UUID contained in the advertisement. While this is one approach, it isn't strictly necessary -- you can simply look for all Bluetooth LE advertisements, then decode them to see if they are beacon advertisements.
I've pasted some code below that shows what I think you want to do. Major caveat: I have not tested this code myself, as I don't have a Windows 10 development environment. If you try it yourself and make corrections, please let me know and I will update my answer.
private BluetoothLEAdvertisementWatcher bluetoothLEAdvertisementWatcher;
public LookForBeacons() {
bluetoothLEAdvertisementWatcher = new BluetoothLEAdvertisementWatcher();
bluetoothLEAdvertisementWatcher.Received += OnAdvertisementReceived;
bluetoothLEAdvertisementWatcher.Start();
}
private async void OnAdvertisementReceived(BluetoothLEAdvertisementWatcher watcher, BluetoothLEAdvertisementReceivedEventArgs eventArgs) {
var manufacturerSections = eventArgs.Advertisement.ManufacturerData;
if (manufacturerSections.Count > 0) {
var manufacturerData = manufacturerSections[0];
var data = new byte[manufacturerData.Data.Length];
using (var reader = DataReader.FromBuffer(manufacturerData.Data)) {
reader.ReadBytes(data);
// If we arrive here we have detected a Bluetooth LE advertisement
// Add code here to decode the bytes in data and read the beacon identifiers
}
}
}
The next obvious question is how do you decode the bytes of the advertisement? It's pretty easy to search the web and find out the byte sequence of various beacon types, even proprietary ones. For the sake of keeping this answer brief and out of the intellectual property thicket, I'll simply describe how to decode the bytes of an open-source AltBeacon advertisement:
18 01 be ac 2f 23 44 54 cf 6d 4a 0f ad f2 f4 91 1b a9 ff a6 00 01 00 02 c5 00
This is decoded as:
The first two bytes are the company code (0x0118 = Radius Networks)
The next two bytes are the beacon type code (0xacbe = AltBeacon)
The next 16 bytes are the first identifier 2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6
The next 2 bytes are the second identifier 0x0001
The next 2 bytes are the third identifier 0x0002
The following byte is the power calibration value 0xC5 -> -59 dBm
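To make that layout concrete, here is a small sketch (in Python rather than C#, purely for illustration) that decodes the example bytes above. Note that on Windows the two-byte company code is exposed separately through manufacturerData.CompanyId, so the buffer you read in OnAdvertisementReceived may start at the beacon type code rather than at the company code.
import struct
import uuid

raw = bytes.fromhex("1801beac2f234454cf6d4a0fadf2f4911ba9ffa600010002c500")

company_code = int.from_bytes(raw[0:2], "little")   # 0x0118 -> Radius Networks
beacon_type  = int.from_bytes(raw[2:4], "little")   # 0xacbe -> AltBeacon
id1          = uuid.UUID(bytes=raw[4:20])           # 2f234454-cf6d-4a0f-adf2-f4911ba9ffa6
id2          = int.from_bytes(raw[20:22], "big")    # 0x0001
id3          = int.from_bytes(raw[22:24], "big")    # 0x0002
tx_power     = struct.unpack("b", raw[24:25])[0]    # 0xC5 -> -59 dBm

print(hex(company_code), hex(beacon_type), id1, id2, id3, tx_power)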
I have a long-standing problem. I searched the web, but I did not find the right answer.
When I send data from the Web API to an Angular controller, there is a problem with the date formatting. Here is the real data:
My TimeZone is UTC + 1
MS SQL:
Column type: DateTime2(3) value: 4.7.2015 20:00:00
The client receives the following formats based on the following criteria:
When I set this in WebApiConfig:
config.Formatters.JsonFormatter.SerializerSettings.Converters.Add(new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AdjustToUniversal });
the client receives 2015-07-04T03:00:00Z, and {{time | date:'HH:mm:ss'}} shows the wrong time: it shows the time + 2 hours -> 22:00:00. I tried {{time | date:'HH:mm:ss':'UTC'}}, but that shows the time - 1 hour -> 19:00:00.
When I set
config.Formatters.JsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;
it is the same as the first example.
When I set
config.Formatters.JsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Local;
it is the same as the first example, except the client receives data that looks like 2015-07-04T22:00:00+02:00.
When I set
config.Formatters.JsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Unspecified;
the client receives data that looks like 2015-07-04T20:00:00 -> this looks OK, but the problem is elsewhere.
I need this time (20:00:00) minus the current time (e.g. 10:00:00) = 10:00:00 difference, but Angular shows 11:00:00. Why?
Here is the source from the Angular controller:
var d1 = new Date(bq.Done_Time)
var d2 = new Date()
bq.Time_Left = new Date(bq.Done_Time).getTime() - new Date().getTime()
Is the problem on server side or client side? And how can I resolve it?
Thank you very much for your valuable suggestions
I have a very basic time series data set of SOI values at monthly intervals from 1950 to 1995.
date soi
1-Dec-94 2.993
1-Nov-94 1.293
1-Oct-94 -1.006
1-Sep-94 -0.80696
1-Aug-94 -1.406
1-Jul-94 -0.20696
1-Jun-94 -2.006
1-May-94 -0.90696
1-Apr-94 -1.806
1-Mar-94 -2.006
1-Feb-94 -1.306
1-Jan-94 -1.306
1-Dec-93 -1.706
1-Nov-93 -1.506
1-Oct-93 0.29374
1-Sep-93 -0.60696
1-Aug-93 1.2937
That is what it looks like.
I want to create a simple Dimple based line graphic. Here is my code:
<div id="chartContainer">
<script type="text/javascript" src="d3.min.js"></script>
<script src="http://dimplejs.org/dist/dimple.v2.1.2.min.js"></script>
<!-- The two script tags above load the dimple and d3 libraries -->
<script type="text/javascript">
var svg = dimple.newSvg("#chartContainer", 1000, 600);
d3.csv("SOI_data.csv", function (data) {
var myChart = new dimple.chart(svg, data);
myChart.setBounds(60, 30, 800, 550);
var x = myChart.addTimeAxis("x", "date", "%d-%b-%y", "%b-%y");
x.addOrderRule("date");
x.timeInterval = 4;
myChart.addMeasureAxis("y", "soi");
var s = myChart.addSeries(null, dimple.plot.line);
myChart.draw();
});
</script>
</div>
And it executes, but with very odd output. It displays the data from Jan-69 to Dec-94, then has a large gap (with a connecting line) to Jan-50, and then continues on correctly until Dec-68. All the data is displayed, it is just displayed in two halves (with a connecting line between them). I don't know how to display an image or I would, but simply put: the data is being graphed in two chunks, out of order, for no apparent reason. I will include any other information if need be. This is my first Stack Overflow post, so thanks for any help!
Under the hood dimple is using d3.time.format to parse times. It has to make a choice: is "1-Dec-50", 2050 or 1950? It chooses 2050:
> format = d3.time.format("%d-%b-%y")
> format.parse("1-Dec-50")
Thu Dec 01 2050 00:00:00 GMT-0500 (Eastern Standard Time)
The easiest fix is to modify your source data to include the century.
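If it helps, one way to do that rewrite offline is with pandas (a sketch only, assuming the SOI_data.csv layout shown in the question); after rewriting, the parse format passed to addTimeAxis would become "%d-%b-%Y":
import pandas as pd

soi = pd.read_csv("SOI_data.csv")
# %y parses two-digit years 50-68 as 2050-2068, so push anything after 1995 back a century
dates = pd.to_datetime(soi["date"], format="%d-%b-%y")
dates = dates.where(dates.dt.year <= 1995, dates - pd.DateOffset(years=100))
soi["date"] = dates.dt.strftime("%d-%b-%Y")
soi.to_csv("SOI_data.csv", index=False)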