How do I access objects for which I don't have the string reference when
using SWIG TCL wrappers?
Basically, in my program some of the objects are predefined even before the TCL shell is loaded. If I were writing the wrappers myself, I would pass a pointer to an object which in turn holds pointers to all the objects created thus far. How can I achieve the same behavior through SWIG?
The simplest method would be to add static methods to the class (or some other wrapped class) that return these special instances. SWIG will then wrap the access correctly, and you'll be able to use the static method calling convention to get handles to those instances.
set foo [YourClass_specialFoo] ;# Get the special instance once
$foo bar ... ;# invoke methods on it
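For illustration, here is a minimal sketch of what the C++ side could look like. The class and method names (YourClass, specialFoo, bar) are assumptions carried over from the Tcl usage above, not from your actual code:
// YourClass.h
class YourClass {
public:
    void bar();
    // Hands out a pre-existing instance, e.g. one created before the
    // TCL shell was loaded. SWIG's Tcl module exposes this static
    // method as YourClass_specialFoo.
    static YourClass* specialFoo();
};
/* yourmodule.i -- SWIG interface file; the module name is made up */
%module yourmodule
%{
#include "YourClass.h"
%}
%include "YourClass.h"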
Related
Is there any way in Rust to get a function pointer from a &str that is provided by the user?
Example: the user provides the name of a function, and I need some way to call that function, preferably through a closure or a function pointer.
Rust is a statically typed, compiled language. There is no access to functions by name at runtime; the functions may not even exist in the compiled binary (they may be inlined, for example, or optimized away). Instead, what you need to do is create a HashMap from strings to the functions you would like to call at runtime, or use a match to dispatch by string.
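For example, a minimal sketch of the HashMap approach; the function names and the registry itself are invented for illustration:
use std::collections::HashMap;

fn greet() { println!("hello"); }
fn farewell() { println!("goodbye"); }

fn main() {
    // Register, at compile time, every function the user may name at runtime.
    let mut dispatch: HashMap<&str, fn()> = HashMap::new();
    dispatch.insert("greet", greet);
    dispatch.insert("farewell", farewell);

    let user_input = "greet"; // e.g. read from stdin or a config file
    match dispatch.get(user_input) {
        Some(f) => f(),
        None => println!("unknown function: {}", user_input),
    }
}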
Is there a way to get autocompletion with IntelliSense in Visual Studio Code to work properly when I read a JSON file and convert it to a PowerShell object like:
$config = Get-Content "SOME_PATH" | ConvertFrom-Json
$config.attribute1
The problem is that the file needs to be in memory before IntelliSense can get the structure of the JSON file and propose attributes.
If I take the code out and execute it in the PowerShell terminal and then go back to the code editor, the autocompletion works fine.
Currently IntelliSense doesn't do this.
Some of the reasons are that while you are coding, the specified file often might not exist, might be a different test version than expected, could be huge, malformed, etc. By default, it is sometimes safer to just not do this.
By running it in the terminal and loading it in memory, you are explicitly telling IntelliSense what you are using, and then it now "knows" about the object, and can then properly suggest the correct properties and attributes.
As @mklement0 suggests, using the keyboard shortcut F8 will conveniently execute the current line/selection in the integrated terminal, which would load the object into memory, and allow you to use IntelliSense in the editor.
To complement HAL9256's helpful answer:
First, some background information; find a working solution in the bottom section.
IntelliSense for variables in Visual Studio Code works if their type is either:
explicitly declared (e.g., [datetime] $var = ...)
or can be inferred from an assigned value.
If an assignment is based on a command (cmdlet, function, script) call, the type can only be inferred from commands with explicitly defined output type(s):
many, but by no means all, cmdlets do declare their output types
functions and scripts must use the [OutputType(<type>)] attribute.
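For example, a minimal sketch of a function that declares its output type (the function name Get-BuildDate is made up for illustration):
function Get-BuildDate {
    [OutputType([datetime])]
    param()
    Get-Date
}

$d = Get-BuildDate
$d.  # IntelliSense now offers [datetime] members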
Additionally, the nondescript [pscustomobject] type returned by ConvertFrom-Json has no inherent properties, only those you add on demand (a "property bag"), so you only get IntelliSense:
if a variable was assigned from a custom-object literal (e.g., $var = [pscustomobject] @{ one = 1; two = 2 })
if you cast a custom object to a specific type, assuming instances of that type can be constructed from the custom object's properties, something that PowerShell makes easy - see this answer.
Solution with a custom class (PSv5+)
Visual Studio Code's IntelliSense (via the PowerShell Extension) does recognize the members of instances of PS custom classes defined via the PSv5+ class statement.
You can therefore use a custom class to mirror the structure of the JSON object you're loading and convert the [pscustomobject] "property bags" returned by ConvertFrom-Json to an instance of that class by way of a cast.
Note: The inherent limitation of this approach is that your class must anticipate all property names that the underlying JSON objects contain, and the two must be kept in sync; otherwise:
if a property name changes on the JSON side, your code will break if the class definition isn't updated accordingly.
if new properties are added on the JSON side, these will be inaccessible unless the class definition is updated accordingly.
Class definitions can either be:
directly embedded in a script
imported from modules via the using module statement (note that using Import-Module does not load a module's classes).
To implement the solution at hand, you can use a class definition in one of two ways:
(a) Define a class directly in your script that matches the structure of the JSON objects, and cast the [pscustomobject] instance returned from ConvertFrom-Json to that type; a variable assigned this way supports IntelliSense.
(b) Wrap the JSON-loading functionality in a module that performs the above inside the module, and passes the class instance out from a function that declares its [OutputType()] to be of that type; code that imports that module with using module will then get IntelliSense for variables that capture the output from that function. A sketch follows the demonstration of (a) below.
A simple demonstration of (a):
# Define a class whose properties mirror the underlying JSON.
class Config {
$foo
$bar
}
# Load the JSON and cast the resulting [pscustomobject] to the class.
# Note: This cast only works if the JSON object's set of properties
# is either the same as that of the [Config] type or a subset of it.
[Config] $config = '{ "foo": "bar", "bar": 42 }' | ConvertFrom-Json
# Variable $config supports IntelliSense, because it is now known
# to be of type Config.
$config. # shows list of properties
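And a minimal sketch of (b); the module and function names (ConfigModule, Get-Config) are made up for illustration:
# ConfigModule.psm1
class Config {
    $foo
    $bar
}

function Get-Config {
    [OutputType([Config])]
    param([string] $Path)
    # Same cast as in (a), performed inside the module.
    [Config] (Get-Content $Path -Raw | ConvertFrom-Json)
}

# consumer.ps1 - 'using module' (unlike Import-Module) also loads the class.
using module .\ConfigModule.psm1

$config = Get-Config -Path .\config.json
$config.  # IntelliSense offers .foo and .bar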
I'm trying to use AutoBean on the server and client to send and receive JSON data through App Engine's Channel API. I don't want to store this data in the datastore. I already have a Proxy for this object that I use for the RequestFactoryServlet (which underneath just uses AutoBean anyway), so this should be doable. Instead of writing up a new Proxy for the object that exactly duplicates the Proxy for the RequestFactoryServlet, I'd like to just use the proxy that I use for the RequestFactoryServlet. The only problem is that I get an error while compiling that comes from my AutoBeanFactory.
Invoking generator
com.google.web.bindery.autobean.gwt.rebind.AutoBeanFactoryGenerator
[ERROR] The com.wmba.wmbaapp.shared.ObjectProxy parameterization is not simple, but the obj method does not provide a
delegate
So I'm not really sure what to do here. It seems like before I added the client side, it was able to serialize the object into JSON just fine, but for some reason it doesn't like this. It sounds like it wants a delegate from me, but I can't find anything about this on the internet.
Anyone have any ideas?
Note: I also tried the same thing with EntityProxy (which is the base of the RequestFactory framework, from what I read on the AutoBean page), but I get the same error.
The issue is that EntityProxy defines the stableId method, which is not a getter (its name doesn't start with get). That makes it not a simple bean, and for those AutoBeans requires a real bean instance to be wrapped in the created AutoBean: the delegate, passed to your obj method of the AutoBeanFactory as an argument of the AutoBean's type (ObjectProxy in your case).
In other words, AutoBeans expects your obj method to be of the form:
AutoBean<ObjectProxy> obj(ObjectProxy toWrap);
The simplest solution is to not try to reuse the entity proxy with AutoBeans.
You might be able to make it work though by annotating your AutoBeanFactory with:
@Category(EntityProxyCategory.class)
You might have to add @NoWrap(EntityProxyId.class) too, see http://code.google.com/p/google-web-toolkit/source/browse/trunk/user/src/com/google/web/bindery/requestfactory/vm/InProcessRequestFactory.java
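A minimal sketch of what such an annotated factory might look like; MyFactory is an assumed name, and whether obj() can then omit the delegate argument is worth verifying against the linked InProcessRequestFactory source:
// The category supplies implementations for non-getter methods
// such as stableId, so the bean is no longer "not simple".
@Category(EntityProxyCategory.class)
@NoWrap(EntityProxyId.class)
interface MyFactory extends AutoBeanFactory {
    AutoBean<ObjectProxy> obj();
}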
It turned out for me that I had a property setter with an empty parameter list in my Object interface. It didn't have anything to do with the factory, except for the interface the factory was trying to create a proxy for:
interface Factory {
AutoBean<MyObject> createObject();
}
interface MyObject {
String getProperty();
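// Bug: the setter below declares no parameter; it should be
// void setProperty(String property);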
void setProperty();
}
A bone-headed mistake, but it held me up with this precise compiler error. Adding the Category annotation, as mentioned in the previous answer, is what identified the faulty property setter.
Could someone explain to me why in GWT you cannot convert a client/shared POJO (that implements Serializable) into a JSON object without jumping through a load of hoops, like using the AutoBeanFactory (e.g. GWT (Client) = How to convert Object to JSON and send to Server?) or creating JavaScript overlay objects (which extend JavaScriptObject)?
GWT compiles your client objects into a JavaScript object, so why can't it then simply convert your JavaScript to JSON if you ask it to?
The supplied GWT JSON library only allows you to JSONify Java objects that extend JavaScriptObject.
I am obviously misunderstanding something about GWT, since GWT compiles a simple Java POJO into a JavaScript object, and in JavaScript you can JSON.stringify it into JSON, so why not in GWT?
GWT compiles your app; it doesn't just convert it. It does take advantage of the prototype object in JavaScript to build classes as needed, usually following your class hierarchy (and any GWT classes you use), but it makes many other changes:
Optimizations:
Tightens up types - if you refer to something as List, but it can only be an ArrayList, it rewrites the type declarations. This by itself doesn't give much, but it lets other steps do better work, such as:
Making methods static - if nothing ever overrides ArrayList.add, for example, this will turn any calls it can prove are to ArrayList.add into a static call, preventing the need for dynamic dispatch, and allowing the 'this' string in the final JS to be replaced with a shorter arg name. This will prevent a JS object from having a method you expect it to have.
Inline Methods - if a method is simple enough, and is called in few enough places, the compiler might remove the method entirely, since it knows all places where it is called. This will directly affect your use case.
Removes/inlines unreferenced fields - if you read a field that is only written once, the compiler will assume the original value is a constant; if you never read it, there is no reason to assign it at all. Values that the compiler can't prove will ever be used don't need to take up space in the JS and time in the browser. This also will directly affect treating GWT'd Java as JS.
After these, among others, the compiler will rename fields, arguments, and types to be as small as possible - rarely will a field or argument be longer than 1 character when this is complete, since those are most frequently used and have the smallest scope, so can be reused the most often by the compiler. This too will affect trying to treat objects as JSON.
The libraries that allow you to export GWT objects as JSON do so by making some other assumption.
JavaScriptObject (JSO) isn't a real Java object, but actually represents a JavaScript instance, so you can cast back and forth at will - the JSNI you write will emerge relatively unoptimized, as the compiler can't tell if you are trying to talk to an external library.
AutoBeans are generated to assume that they should have the ability to write out JSON, so specific methods to encode objects are written in. They will be subject to the same rules as the other Java that is compiled - code that isn't used may be removed, code that is only called one way might be tightened up or inlined.
Libraries that can export JS compile in Java details into the final executable, making it bigger, but giving you the ability to treat these Java objects like JS in some limited way.
One last point, since you are talking both about JSON and Javascript - Some normal JS isn't suitable for writing out as JSON. Date objects don't have a consistent way to serialize that is recognized by JSON. Non-tree object graphs can't be serialized:
var obj = {};
obj.prop = {};
obj.prop.obj = obj;
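// JSON.stringify(obj) would throw a TypeError here (circular structure)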
AutoBeans come with a built-in checker for these circular references, and I would hope the JSO serialization does as well.
In C#/.NET, can I pass a DataContext object (created by LINQ to SQL) as a parameter to a method in another class?
You can, if the project the class is defined in references the project the DataContext object is created in.
Those DataContext objects are auto-generated, however, and aren't necessarily a good dependency to add to your other "concrete" classes.
I'd just translate the data in the DataContext into a concrete business object and pass that into your class as a parameter to your method.
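For example, a minimal sketch of that translation; all type and member names here are invented for illustration:
// Plain business object holding just the data the other class needs.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ReportBuilder
{
    // Takes the translated object, not the DataContext, so this class
    // has no dependency on the LINQ to SQL model.
    public string BuildHeader(CustomerDto customer)
    {
        return $"Report for {customer.Name} ({customer.Id})";
    }
}

// At the call site:
// var c = db.Customers.Single(x => x.Id == id);
// var dto = new CustomerDto { Id = c.Id, Name = c.Name };
// var header = new ReportBuilder().BuildHeader(dto);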
The DataContext is just a reference to your model interface. Passing it as a reference is useless, since you can instantiate it whenever you want, and this approach would even confuse other developers.
"First of all, passing a ref variable is used to be able to change variable that holds the reference. But since you are not changing the DataContext dbase reference in your GetPByRole method, passing it as a ref is useless and would even confuse other developers. Perhaps you misunderstand value types and reference types. Reference types (such as DataContext) are always passed by reference, passing it around through method calls will not make new copies of the object itself, merely copies of the reference (which is a 32 or 64 bits value)."
Check this answer: https://stackoverflow.com/a/5248024/5878893