My app uses a Dictionary
protected _categoryToValueDict:Dictionary = new Dictionary();
to map something to something else.
Now, at a certain point in the application, I need to remove a certain key from the Dictionary.
I implemented this simple method:
public function setCategoryNoValue( cat:TAModelCategory ):void {
    // delete( _categoryToValueDict[ cat ] );
    var old:Dictionary = _categoryToValueDict;
    _categoryToValueDict = new Dictionary();
    for ( var key:* in old ) {
        if ( key != cat ) {
            _categoryToValueDict[ key ] = old[ key ];
        }
    }
}
If I only use the delete operator
delete( _categoryToValueDict[ cat ] );
the app itself doesn't throw errors in normal mode. But as soon as I serialize its external data structure to an external source [currently SharedObject], the app isn't able to de-serialize it later on.
If I use the manual iterative removal coded above, the de-serialization works as expected and the model appears in the app.
The two alternatives should behave identically, shouldn't they?
Thus, my question: What's the difference between the two alternatives?
PS: This question might be related to my previous one.
UPDATE-1
Adobe explains on this page:
To make the object referenced by myObject eligible for garbage collection, you must remove all references to it. In this case, you must change the value of myObject and delete the myObject key from myMap, as shown in the following code:
myObject = null;
delete myMap[myObject];
I suppose this is a typo. Shouldn't it read like this:
delete myMap[myObject];
myObject = null;
Why pass a null pointer to myMap as the key?
Okay, I just spent a good two hours or so looking into this, which is way more than I was planning to spend. But I was intrigued.
I think you may have uncovered a legitimate bug in ActionScript's AMF encoding (or in how the Dictionary class gets serialized via AMF). The bug affects anything that uses AMF, so the exact same issue is reproducible with a ByteArray, which I'm going to use for demonstration purposes.
Consider the following code:
var d:Dictionary = new Dictionary(false);
d["goodbye"] = "world";
d["hello"] = "world";
delete d["hello"]
var ba:ByteArray = new ByteArray();
ba.writeObject(d);
var len:uint = ba.position;
ba.position = 0;
for (var i:uint = 0; i < len; i++) {
    trace(ba.readUnsignedByte().toString(16));
}
The output will be:
11 05 00 06 0f 67 6f 6f 64 62 79 65 06 0b 77 6f 72 6c 64
Now what if we don't ever put the "hello" in as a key:
var d:Dictionary = new Dictionary(false);
d["goodbye"] = "world";
var ba:ByteArray = new ByteArray();
ba.writeObject(d);
var len:uint = ba.position;
ba.position = 0;
for (var i:uint = 0; i < len; i++) {
    trace(ba.readUnsignedByte().toString(16));
}
The output then is:
11 03 00 06 0f 67 6f 6f 64 62 79 65 06 0b 77 6f 72 6c 64
Notice that the length is exactly the same; however, they differ in the second byte.
Now let's look at the serialization if I don't delete "hello" at all:
11 05 01 06 0b 68 65 6c 6c 6f 06 0b 77 6f 72 6c 64 06 0f 67 6f 6f 64 62 79 65 06 02
Notice that the 05 in the second byte is the same as when we deleted it. I think this is specifying the number of items in the Dictionary. I say "I think" because I dug through the AMF0/AMF3 documentation for quite a while trying to figure out exactly what's going on here; it doesn't seem like this should be the serialization for a Dictionary, but it's fairly consistent, and I still don't get it.
So I think that's why you are hitting an exception (specifically the "End of file" error): it still thinks there should be another item in the dictionary to de-serialize.
Your alternate method works because you are constructing a new Dictionary and populating it. Its "internal counter" only ever increases, so it works like a charm.
Another thing to note: if you set d["hello"] = undefined, it does not throw an exception, but the item does not get removed from the dictionary either. The key gets serialized with a value of undefined in the AMF stream, so the resulting byte stream is longer than if the key had never been there.
Using an Object doesn't seem to exhibit this same behavior. Not only does it not produce an error, the generated bytes are more in line with the AMF0/AMF3 documentation I could find from Adobe, and the deleted "key" is literally dropped from the serialization, as if it had in fact never been there. So I'm not sure what special case they are using for Dictionary (apparently the undocumented AMF3 data type 0x11), but it does not play right with deleting items out of it.
It seems like a legit bug to me.
edit
So I dug around a bit more and found other people talking about AMF serialization of a Dictionary.
0x11 : Dictionary Data Type
0x05 : Bit code: XXXX XXXY
     : If Y == 0 then X is a reference to a previously encoded object in the stream
     : If Y == 1 then X is the number of key/value pairs in the dictionary.
So in this case 5&1 == 1 and 5>>1 == 2, so it's expecting two key/value pairs in the "bad" serialized version (see the small sketch below).
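To make the header arithmetic concrete, here is a tiny sketch (written in Python purely for illustration; it only interprets the two leading bytes as described above and is in no way a real AMF parser):

def read_dictionary_header(first_bytes):
    # 0x11 is the (undocumented) AMF3 Dictionary type marker;
    # the next byte is the bit code XXXX XXXY described above
    type_marker, bit_code = first_bytes[0], first_bytes[1]
    assert type_marker == 0x11
    if bit_code & 1:                 # Y == 1: X is the number of key/value pairs
        return ("inline", bit_code >> 1)
    else:                            # Y == 0: X references a previously encoded object
        return ("reference", bit_code >> 1)

# the "bad" stream produced after delete d["hello"] still claims 2 entries:
print(read_dictionary_header(bytes([0x11, 0x05])))   # ('inline', 2)
# the stream where "hello" was never added claims only 1:
print(read_dictionary_header(bytes([0x11, 0x03])))   # ('inline', 1)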
Correct syntax for the delete operator is like this:
delete _categoryToValueDict[ cat ];
Although using parentheses seems to compile fine, it's not the correct way.
I need help with the specific code I will paste below. I am using the Ames Housing data set collected by Dean De Cock.
I am using a Python notebook and editing through Anaconda's JupyterLab 2.1.5.
The code below is supposed to replace all np.nan or "None" values. For some reason,
after repeatedly calling a hand-made function inside a for loop, the columns of the resulting data frame get swapped around.
Note: I am aware I could do this with an "imputer." I plan to select numeric and object type features, impute them separately, then put them back together. As a side note, is there any way I can do that while still displaying or otherwise verifying the details I currently print out manually?
In the cell in question, the flow is:
1. Get and assign the number of data points in the data frame df_train.
2. Get and assign a series that lists the count of null values in df_train. The syntax is sr_null_counts = df_train.isnull().sum().
3. Create an empty list to which the names of features that have more than 5% of their values equal to null are appended. They will be dropped later, outside the for-loop. (I thought at first that this was the problem, since the command to drop the columns of df_train in-place used to be inside the for-loop.)
4. Repeatedly call a hand-made function to impute columns whose null values do not exceed 5% of the row count of df_train.
I used a hand-made function, called inside a for-loop, with nested try-except statements to:
1. Accept a series and, optionally, the series' name from when it was a column in a dataframe. It assigns a copy of the passed series to a local variable.
2. In that exact order, (a) try to replace all null (NaN or None) values with the mean of the passed series; (b) if that fails, try to replace all null values with the median of the series; (c) if even that fails, replace all null values with the mode of the series.
3. Return the edited copy of the series with all null values replaced. It should also print out strings telling me which feature was modified and which summary statistic was used to replace/impute the missing values.
The final line is to drop all the columns marked as having more than 5% missing values.
Here is the full code:
Splitting the main dataframe into a train and test set.
The full data set was loaded through df_housing = pd.read_csv(sep = '\t', filepath_or_buffer = "AmesHousing.tsv").
def make_traintest(df, train_fraction = 0.7, random_state_val = 88):
    df = df.copy()
    df_train = df.sample(frac = train_fraction, random_state = random_state_val)
    bmask_istrain = df.index.isin(df_train.index.values)
    df_test = df.loc[ ~bmask_istrain ]
    return {
        "train": df_train,
        "test": df_test
    }
dict_traintest = make_traintest(df = df_housing)
df_train = dict_traintest["train"]
df_test = dict_traintest["test"]
Get a List of Columns With Null Values
lst_have_nulls = []
for feature in df_housing.columns.values.tolist():
    nullcount = df_housing[feature].isnull().sum()
    if nullcount > 0:
        lst_have_nulls.append(feature)
        print(feature, "\n=====\nNull Count:\t", nullcount, '\n',
              df_housing[feature].value_counts(dropna = False), '\n*****')
Definition of the hand-made function:
def impute_series(sr_values, feature_name = ''):
    sr_out = sr_values.copy()
    try:
        sr_out.fillna(value = sr_values.mean())
        print("Feature", feature_name, "imputed with mean:", sr_values.mean())
    except Exception as e:
        print("Filling NaN values with mean of feature", feature_name, "caused an error:\n", e)
        try:
            sr_out.fillna(value = sr_values.median())
            print("Feature", feature_name, "imputed with median:", sr_values.median())
        except Exception as e:
            print("Filling NaN values with median for feature", feature_name, "caused an error:\n", e)
            sr_out.fillna(value = sr_values.mode())
            print("Feature", feature_name, "imputed with mode:", sr_values.mode())
    return sr_out
For-Loop
Getting the count of null values, defining the empty list of columns to drop (so it can be appended to), and then, for every column in lst_have_nulls, checking whether the column has more than 5% missing values. If it does, the column is appended to lst_drop; otherwise the hand-made imputing function is called. After the for-loop, all columns in lst_drop are dropped in-place. (A sketch of that cell follows below, since the cell itself is not reproduced here.)
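For reference, a minimal sketch of what that cell might look like, reconstructed from the description above (it reuses the names df_train, lst_have_nulls, and impute_series from the code shown earlier, plus a lst_drop list; it is not the notebook's exact code):

int_row_count = df_train.shape[0]
sr_null_counts = df_train.isnull().sum()
lst_drop = []

for feature in lst_have_nulls:
    if sr_null_counts[feature] > 0.05 * int_row_count:
        # more than 5% missing: mark the column to be dropped later
        lst_drop.append(feature)
    else:
        # 5% or less missing: impute with the hand-made function
        df_train[feature] = impute_series(df_train[feature], feature_name = feature)

# after the loop: drop all marked columns in-place
df_train.drop(columns = lst_drop, inplace = True)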
Where did I go wrong? In case you need the entire notebook, I have uploaded it to Kaggle. Here is a link.
https://www.kaggle.com/joachimrives/ames-housing-public-problem
Update: Problem Still Exists After Testing Anvar's Answer with Changes
When I tried the code of Anvar Kurmukov, my dataframe column values still got swapped. The change I made was adding int and float to the list of dtypes to check. The changes are inside the for-loop:
if dtype in [np.int64, np.float64, int, float].
It may be a problem with another part of my code in the full notebook. I will need to check where it is by calling df_train.info() cell by cell from the top. I tested the code in the notebook I made public. It is in cell 128. For some reason, after running Anvar's code, the df_train.info() method returned this:
1st Flr SF 2nd Flr SF 3Ssn Porch Alley Bedroom AbvGr Bldg Type Bsmt Cond Bsmt Exposure Bsmt Full Bath Bsmt Half Bath ... Roof Style SalePrice Screen Porch Street TotRms AbvGrd Total Bsmt SF Utilities Wood Deck SF Year Built Year Remod/Add
1222 1223 534453140 70 RL 50.0 4882 Pave NaN IR1 Bnk ... 0 0 0 0 0 NaN NaN NaN 0 87000
1642 1643 527256040 20 RL 81.0 13870 Pave NaN IR1 HLS ... 52 0 0 174 0 NaN NaN NaN 0 455000
1408 1409 905427050 50 RL 66.0 21780 Pave NaN Reg Lvl ... 36 0 0 144 0 NaN NaN NaN 0 185000
1729 1730 528218050 60 RL 65.0 10237 Pave NaN Reg Lvl ... 72 0 0 0 0 NaN NaN NaN 0 178900
1069 1070 528180110 120 RL 58.0 10110 Pave NaN IR1 Lvl ... 48 0 0 0 0 NaN NaN NaN 0 336860
tl;dr: instead of try/except you should simply use an if and check the dtype of the column; you do not need to iterate over columns.
drop_columns = df.columns[df.isna().sum() / df.shape[0] > 0.05]
df = df.drop(drop_columns, axis=1)

num_columns = []
cat_columns = []
for col, dtype in df.dtypes.iteritems():
    if dtype in [np.int64, np.float64]:
        num_columns.append(col)
    else:
        cat_columns.append(col)

df[num_columns] = df[num_columns].fillna(df[num_columns].mean())
df[cat_columns] = df[cat_columns].fillna(df[cat_columns].mode())
Short comment on the make_traintest function: I would simply return 2 separate DataFrames instead of a dictionary, or use sklearn.model_selection.train_test_split (sketch below).
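A quick sketch of that alternative; the test_size and random_state values are only illustrative, mirroring the 0.7/0.3 split and seed used in the question:

from sklearn.model_selection import train_test_split

# returns two DataFrames directly instead of a dict
df_train, df_test = train_test_split(df_housing, test_size = 0.3, random_state = 88)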
upd. You can check the number of NaN values in a column, but it is unnecessary if your only goal is to impute NaNs.
Answer
I discovered the answer as to why my columns were being swapped. They were not actually being swapped. The original problem was that I had not set the "Order" column as the index column. To fix the problem in the notebook on my PC, I simply added the following parameter and value to pd.read_csv: index_col = "Order". That fixed the problem in my local notebook. When I tried it on the Kaggle notebook, however, it did not fix the problem.
The version of the Ames Housing data set I first used in the notebook - for some reason - was also the cause of the column swapping.
Anvar's code is fine. You may test the code I wrote, but to be safe, defer to Anvar's code. Mine is still to be tested.
Testing Done
I modified the Kaggle notebook I linked in my question. I used the data set I was actually working with on my PC. When I did that, the code given in Anvar Kurmukov's answer worked perfectly. I tested my own code and it seems fine, but test both versions before relying on them. I only reviewed the data sets using head() and manually checked the column inputs. If you want to check the notebook, here it is:
https://www.kaggle.com/joachimrives/ames-housing-public-problem/
To test whether the data set was at fault, I created two data frames. One was taken directly from my local file uploaded to Kaggle. The other used the current version of the Ames Iowa Housing data set I had used as input. The columns were properly "aligned" with their expected input. To find the expected column values, I used this source:
http://jse.amstat.org/v19n3/decock/DataDocumentation.txt
Here are the screenshots of the different results I got when I swapped data sets:
With an uploaded copy of my local file:
With the original AmesHousing.csv From Notebook Version 1:
The data set I used that caused the column swap on the Kaggle notebook:
https://www.kaggle.com/marcopale/housing
According to the Bluetooth Advertisement sample, I need to set the CompanyID (UInt16) and the Data (IBuffer, a UInt16 in the sample) to start watching for advertisers.
On the iPhone, I can set the beacon UUID to 4B503F1B-C09C-4AEE-972F-750E9D346784. And reading on the internet, I found that Apple's company ID is 0x004C, so I tried both 0x004C and 0x4C00.
So, this is the code I have so far, but of course, it is not working.
var manufacturerData = new BluetoothLEManufacturerData();

// Set the company ID for the manufacturer data (Apple's company ID is 0x004C)
manufacturerData.CompanyId = 0x4C00;

// Set the data payload within the manufacturer-specific section:
// here, the beacon's 128-bit UUID
var writer = new DataWriter();
writer.WriteGuid(Guid.Parse("4B503F1B-C09C-4AEE-972F-750E9D346784"));

// Make sure that the buffer length can fit within an advertisement payload.
// Otherwise you will get an exception.
manufacturerData.Data = writer.DetachBuffer();
I also tried inverting the bytes in the UUID:
writer.WriteGuid(Guid.Parse("504B1B3F-9CC0-EE4A-2F97-0E75349D8467"));
No success so far. Am I mixing two completely different technologies?
The most important thing you need to do to detect Beacons on Windows 10 is to use the new BluetoothLeAdvertisementWatcher class.
The code in the question seems focussed on setting up a filter to look for only specific Bluetooth LE advertisements matching a company code and perhaps a UUID contained in the advertisement. While this is one approach, it isn't strictly necessary -- you can simply look for all Bluetooth LE advertisements, then decode them to see if they are beacon advertisements.
I've pasted some code below that shows what I think you want to do. Major caveat: I have not tested this code myself, as I don't have a Windows 10 development environment. If you try it yourself and make corrections, please let me know and I will update my answer.
private BluetoothLEAdvertisementWatcher bluetoothLEAdvertisementWatcher;

public LookForBeacons() {
    bluetoothLEAdvertisementWatcher = new BluetoothLEAdvertisementWatcher();
    bluetoothLEAdvertisementWatcher.Received += OnAdvertisementReceived;
    bluetoothLEAdvertisementWatcher.Start();
}

private async void OnAdvertisementReceived(BluetoothLEAdvertisementWatcher watcher, BluetoothLEAdvertisementReceivedEventArgs eventArgs) {
    var manufacturerSections = eventArgs.Advertisement.ManufacturerData;
    if (manufacturerSections.Count > 0) {
        var manufacturerData = manufacturerSections[0];
        var data = new byte[manufacturerData.Data.Length];
        using (var reader = DataReader.FromBuffer(manufacturerData.Data)) {
            reader.ReadBytes(data);
            // If we arrive here we have detected a Bluetooth LE advertisement.
            // Add code here to decode the bytes in data and read the beacon identifiers.
        }
    }
}
The next obvious question is how do you decode the bytes of the advertisement? It's pretty easy to search the web and find out the byte sequence of various beacon types, even proprietary ones. For the sake of keeping this answer brief and out of the intellectual property thicket, I'll simply describe how to decode the bytes of an open-source AltBeacon advertisement:
18 01 be ac 2f 23 44 54 cf 6d 4a 0f ad f2 f4 91 1b a9 ff a6 00 01 00 02 c5 00
This is decoded as follows (a short decoding sketch appears after the list):
The first two bytes are the company code (0x0118 = Radius Networks)
The next two bytes are the beacon type code (0xacbe = AltBeacon)
The next 16 bytes are the first identifier 2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6
The next 2 bytes are the second identifier 0x0001
The next 2 bytes are the third identifier 0x0002
The following byte is the power calibration value 0xC5 -> -59 dBm
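As a rough, untested illustration (written in Python rather than C#, just to keep it compact), here is how the example bytes above can be sliced according to that layout; the same offsets apply to the data array in the C# handler, although, depending on the platform, the two company-code bytes may already have been stripped out of the manufacturer-data buffer:

hex_bytes = "18 01 be ac 2f 23 44 54 cf 6d 4a 0f ad f2 f4 91 1b a9 ff a6 00 01 00 02 c5 00"
raw = bytes.fromhex(hex_bytes)

company_code = int.from_bytes(raw[0:2], "little")              # 0x0118 = Radius Networks
beacon_type  = int.from_bytes(raw[2:4], "little")              # 0xacbe = AltBeacon
id1          = raw[4:20].hex()                                 # 2f234454cf6d4a0fadf2f4911ba9ffa6
id2          = int.from_bytes(raw[20:22], "big")               # 0x0001
id3          = int.from_bytes(raw[22:24], "big")               # 0x0002
tx_power     = int.from_bytes(raw[24:25], "big", signed=True)  # 0xC5 -> -59 dBm

print(hex(company_code), hex(beacon_type), id1, id2, id3, tx_power)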
I use this URL for fetching map tiles from the Google server:
http://mts0.google.com/vt/lyrs=m#189000000&hl=en&src=app&x=41189&y=25680&z=16&s=Gal
I wonder if there is a way to customize this URL, by adding some extra parameters, to fetch tiles without any street labels, extra info, or overlays.
Something just like customizing the map in Maps API v3.
Any suggestion would be welcome.
I didn't find any documentation about it, but there is an apistyle parameter.
The value (which must be URL-encoded) to hide street labels would be:
s.t:3|s.e:l|p.v:off
The following is a guess, because of the missing documentation:
s.t defines the feature type; the value 3 seems to be road.
s.e defines the element, e.g. labels or geometry.
p defines the styler; v stands for visibility, and the value off should be clear.
result:
https://mts0.google.com/vt/lyrs=m#289000001&hl=en&src=app&x=41189&y=25680&z=16&s=Gal&apistyle=s.t%3A3|s.e%3Al|p.v%3Aoff
You'll have to play around with the parameters to get the desired result. In the past it was possible to get the style, e.g. by inspecting the tile URL with developer tools when using, for example, the Styled Map Wizard, but they have modified the tile URLs used by the JavaScript API; the parameters are now encoded somehow.
A list of parameters and values:
FeatureTypes: s.t
all 0
administrative 1
administrative.country 17
administrative.land_parcel 21
administrative.locality 19
administrative.neighborhood 20
administrative.province 18
landscape 5
landscape.man_made 81
landscape.natural 82
poi 2
poi.attraction 37
poi.business 33
poi.government 34
poi.medical 36
poi.park 40
poi.place_of_worship 38
poi.school 35
poi.sports_complex 39
road 3
road.arterial 50
road.highway 49
road.local 51
transit 4
transit.line 65
transit.station 66
water 6
ElementType: s.e
geometry g
geometry.fill g.f
geometry.stroke g.s
labels l
labels.icon l.i
labels.text l.t
labels.text.fill l.t.f
labels.text.stroke l.t.s
Styler:
color p.c
RGBA hex-value #aarrggbb
gamma p.g
float between 0.01 and 10
hue p.h
RGB hex-value #rrggbb
invert_lightness p.il
true/false
lightness p.l
float between -100 and 100
saturation p.s
float between -100 and 100
visibility p.v
on/simplified/off
weight p.w
integer >=0
Implementation of what Dr. Molle found out:
function getEncodedStyles(styles){
    var ret = "";
    var styleparse_types = {"all":"0","administrative":"1","administrative.country":"17","administrative.land_parcel":"21","administrative.locality":"19","administrative.neighborhood":"20","administrative.province":"18","landscape":"5","landscape.man_made":"81","landscape.natural":"82","poi":"2","poi.attraction":"37","poi.business":"33","poi.government":"34","poi.medical":"36","poi.park":"40","poi.place_of_worship":"38","poi.school":"35","poi.sports_complex":"39","road":"3","road.arterial":"50","road.highway":"49","road.local":"51","transit":"4","transit.line":"65","transit.station":"66","water":"6"};
    var styleparse_elements = {"all":"a","geometry":"g","geometry.fill":"g.f","geometry.stroke":"g.s","labels":"l","labels.icon":"l.i","labels.text":"l.t","labels.text.fill":"l.t.f","labels.text.stroke":"l.t.s"};
    var styleparse_stylers = {"color":"p.c","gamma":"p.g","hue":"p.h","invert_lightness":"p.il","lightness":"p.l","saturation":"p.s","visibility":"p.v","weight":"p.w"};

    for(var i = 0; i < styles.length; i++){
        if(styles[i].featureType){
            ret += "s.t:" + styleparse_types[styles[i].featureType] + "|";
        }
        if(styles[i].elementType){
            if(!styleparse_elements[styles[i].elementType])
                console.log("style element transcription unknown:" + styles[i].elementType);
            ret += "s.e:" + styleparse_elements[styles[i].elementType] + "|";
        }
        if(styles[i].stylers){
            for(var u = 0; u < styles[i].stylers.length; u++){
                var cstyler = styles[i].stylers[u];
                for(var k in cstyler){
                    if(k == "color"){
                        if(cstyler[k].length == 7)
                            cstyler[k] = "#ff" + cstyler[k].slice(1);
                        else if(cstyler[k].length != 9)
                            console.log("malformed color:" + cstyler[k]);
                    }
                    ret += styleparse_stylers[k] + ":" + cstyler[k] + "|";
                }
            }
        }
        ret = ret.slice(0, ret.length - 1);
        ret += ",";
    }
    return encodeURIComponent(ret.slice(0, ret.length - 1));
}
Input is, in this case, a regular Google Maps styles array. A good wizard for that is Snazzy Maps.
Anyway, thanks to Dr. Molle for saving hours!
AS3
I somewhat understand parameters, but for some reason I just can't fully understand them.
Say, dog(bark:string, bone:uint, grass:Array).
dog.bark - string, dog.bone - uint, dog.grass - Array. Right? But then this I don't understand:
public function MenuButtonMain(labl:String) - in the code below. There's no other class with labl in it;
it's the last class. I somewhat understand, but if you could give me the how, the why, every possibility, and everything you can do with it, as technical as it can be, it'll be a huge help. THANK YOU
public function MenuButtonMain(labl:String) {
    _btnLabel = new TextField();
    _btnLabel.autoSize = TextFieldAutoSize.CENTER;
    _btnLabel.textColor = 0xFFFFFF;
    _btnLabel.text = labl;
    _btnLabel.mouseEnabled = false;
    addChild(_btnLabel);

    buttonMode = true;
    useHandCursor = true;
    addEventListener(MouseEvent.CLICK, onClick,
                     false, 0, true);
}
There's no difference in the behaviour of parameters between the two example "codes" you provided.
Each of your functions has a name. You use that to call or execute the function.
Each function has a list of parameters that you have to pass to the function when executing it.
Each parameter has a name to identify it and a type that defines what kind of parameter it is.
Hint: AS3 is case sensitive. Compare:
bark:string
with:
labl:String
The parameters have different names and different types.
If you want to know what the code in the body of the function does, you should ask for that.
Your question is way too broad: "how, why, every possibility and everything".
A parameter is a variable scoped to a function.
var someAnswer:int = getAnswer(7);
trace(someAnswer); // 12

someAnswer = getAnswer(4);
trace(someAnswer); // 9

public function getAnswer(someNumber:int):int {
    var tempAnswer:int = 5 + someNumber;
    return tempAnswer;
}
59 for (i=0; i < count; i++) //count = number of children
60 {
61     if (localXML.children()[i].Name.toString != firstName
           && localXML.children()[i].Surname.toString != surName
           && localXML.children()[i].Company.toString != companyName)
62     {
63         tempXML.appendChild(localXML.children()[i]);
64     }
65     trace("tempXML: --> "+tempXML);
66     localXML = tempXML; <---- WRONG PLACE!!!
67 }
Hello all. I'm getting an error #1010 at line 61.
I did test each value individually and every one is traced normally. The error is:
TypeError: Error #1010: at ... frame9:61
The script is always appending localXML.children()[0] and nothing else.
I can't see any error there. Any idea?
Thanks in advance.
SOLVED:
59 for (i=0; i < count; i++) //count = number of children
60 {
61     if (localXML.children()[i].Name != firstName
           && localXML.children()[i].Surname != surName
           && localXML.children()[i].Company != companyName)
62     {
63         tempXML.appendChild(localXML.children()[i]);
64     }
65 }
66 trace("tempXML: --> "+tempXML);
67 localXML = tempXML; <---- MOVED HERE!!!
I was updating localXML within every loop iteration! Shame!
Check the XML. Either localXML.children()[i] is null or Name does not exist as a child node on the object.
Also remember that if Name is an attribute in the XML, then you need to access it differently.
If the Name is set up like this:
<node>
<Name>Stuff</Name>
</node>
Then you access it as you have done so already. But if it is an attribute like so:
<node Name="stuff"></node>
Then you need to access it like this:
localXML.children()[i].@Name
Another possible issue is the children() call. I've never used it before so I do not know specifically how it behaves. If the above issues do not fix it, try rewriting the parser to skip the children() call and just parse it like you normally would with nested loops.
In the end, though, Error #1010 means a term is undefined and doesn't exist, so you just need to figure out why it doesn't exist.