SecurityManager deprecation and reflection with suppressAccessChecks - java-module

I'm a university lecturer and I'm revising my lecture on Java reflection.
In previous years, when teaching about the horrors of suppressAccessChecks, I showed
that you could set up a SecurityManager and do something like
if ("suppressAccessChecks".equals(p.getName())){
StackTraceElement[] st = Thread.currentThread().getStackTrace();
if(.. st ..) { throw new SecurityException(); }
}
In this way you can allow only whitelisted deserializers to call suppressAccessChecks.
However, now they are deprecating the SecurityManager.
I think the new module system is supposed to help here, but I'm failing to find resources explaining how to support the idea of the whitelisted deserializers above.
Any hint?

With Java modules, setAccessible is already restricted, even without a security manager:
This method may be used by a caller in class C to enable access to a member of declaring class D if any of the following hold:
C and D are in the same module.
The member is public and D is public in a package that the module containing D exports to at least the module containing C.
The member is protected static, D is public in a package that the module containing D exports to at least the module containing C, and C is a subclass of D.
D is in a package that the module containing D opens to at least the module containing C. All packages in unnamed and open modules are open to all modules and so this method always succeeds when D is in an unnamed or open module.
If we assume a typical scenario of a module M using a persistence service in module P, with members that are not otherwise accessible, only the last bullet applies; M must open the package(s) to P to enable the access override.
This can be done via a qualified opens directive:
module M {
    opens aPackage.needing.persistence to P;
}
This way, only the explicitly specified module(s), i.e. P, can use setAccessible for members of types in aPackage.needing.persistence.
In the case of the HotSpot JVM, there is the --add-opens option, which allows adding qualified opens relationships at startup, in addition to the declared ones. But there is no option for a module of an already running application to create such a relationship at runtime to gain additional access rights for itself (unless security has already been subverted). It's also imaginable that other environments do not support such a startup option at all.
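For illustration, using the module and package names from the sketch above (the module path and main class are made up), such a startup option could look like:

java --add-opens M/aPackage.needing.persistence=P --module-path mods -m M/app.Main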
It’s worth mentioning that there are still some new restrictions which can’t be circumvented this way. As also mentioned in setAccessible’s documentation:
This method cannot be used to enable write access to a non-modifiable final field. The following fields are non-modifiable:
static final fields declared in any class or interface
final fields declared in a hidden class
final fields declared in a record
See also this answer
In other words, in the case of a record type, the persistence service still must use the constructor to deserialize an instance, even when suppressing access checks.
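As a sketch of what that means in practice (the Point record and the helper method are made up for illustration), a deserializer would locate the canonical constructor via the record components instead of writing to the final fields:

import java.lang.reflect.Constructor;
import java.lang.reflect.RecordComponent;
import java.util.Arrays;

// Hypothetical record standing in for a persisted type.
record Point(int x, int y) {}

public class RecordDeserializer {

    // Builds a record instance from already-decoded component values,
    // given in component declaration order.
    static <T> T instantiate(Class<T> recordClass, Object... values) throws Exception {
        Class<?>[] componentTypes = Arrays.stream(recordClass.getRecordComponents())
                .map(RecordComponent::getType)
                .toArray(Class<?>[]::new);
        Constructor<T> canonical = recordClass.getDeclaredConstructor(componentTypes);
        canonical.setAccessible(true); // in a real setup this requires the record's package to be opened to this module
        return canonical.newInstance(values);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(instantiate(Point.class, 1, 2)); // Point[x=1, y=2]
    }
}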

Related

How to use an external variable in linkage section in COBOL and pass values from it into a new module and write into my new output file

Could someone please tell me why a variable is declared as "External" in a module, how to use it in other modules through the Linkage Section, and how to pass its values into new fields so I can write them to a new file?
EXTERNAL items are commonly found in WORKING-STORAGE. These are normally not passed from one program to another via CALL and LINKAGE but shared directly via the COBOL runtime.
Declaring an item as EXTERNAL behaves like "runtime named global storage": you assign a name and a length to a global piece of memory and can access it anywhere in the same runtime unit (no direct CALL needed), even in cases like the following:
MAIN
  -> CALL B
       B: somevar EXTERNAL
       -> MOVE 'TEST' TO somevar
  -> CANCEL B
  -> CALL C
       C: somevar EXTERNAL -> now contains 'TEST'
On an IBM Z mainframe running z/OS, the runtime for all High Level Languages (HLLs) is called Language Environment (LE). Decades ago, each HLL had its own runtime, and this caused some problems when they were mixed into the same run unit; starting in the early 1990s IBM switched all HLLs to LE for their runtime.
LE has the concept of an enclave. Part of the text at that link says an enclave is the equivalent of a run unit in COBOL.
Your question is tagged CICS, and sometimes behavior is different when running in that environment. Quoting from that link...
Under CICS the execution of a CICS LINK command creates what Language Environment calls a Child Enclave. A new environment is initialized and the child enclave gets its runtime options. These runtime options are independent of those options that existed in the creating enclave.
[...]
Something similar happens when a CICS XCTL command is executed. In this case we do not get a child enclave, but the existing enclave is terminated and then reinitialized with the runtime options determined for the new program. The same performance considerations apply.
So, as @SimonSobich noted, if you use CALLs to invoke your subroutines when running in CICS, EXTERNAL data is global to the run unit. But if you use EXEC CICS XCTL to invoke your subroutines, you may see different behavior and have to design your application differently.

Checkstyle check for duplicate classes

The project I am on is having horrible problems with class collisions in the classpath and developers reusing class names. For example, we have 16, yes 16, interfaces called Constants in this bloody thing and it's causing all kinds of problems.
I want to implement a Checkstyle check that will search for various forms of class duplication. Here's the class:
import java.io.File;
import java.util.List;

import com.puppycrawl.tools.checkstyle.api.AbstractFileSetCheck;

import com.wps.codetools.common.classpath.ClassScanner;
import com.wps.codetools.common.classpath.criteria.ClassNameCriteria;
import com.wps.codetools.common.classpath.locator.ClasspathClassLocator;

/**
 * This codestyle check is designed to scan the project for duplicate class names.
 * This is being done because it is common that if a class name matches a class
 * name that is in a library, the two can be confused. It is my best practice that
 * the class names should be unique to the project.
 */
public class DuplicateClassNames extends AbstractFileSetCheck {

    private int fileCount;

    @Override
    public void beginProcessing(String aCharset) {
        super.beginProcessing(aCharset);
        // reset the file count
        this.fileCount = 0;
    }

    @Override
    public void processFiltered(File file, List<String> aLines) {
        this.fileCount++;
        System.out.println(file.getPath());
        ClassScanner scanner = new ClassScanner();
        scanner.addClassCriteria(new ClassNameCriteria(file.getPath()));
        scanner.addClassLocater(new ClasspathClassLocator());
        List<Class<?>> classes = scanner.findClasses();
        if (classes.size() > 0) {
            // log the message
            log(0, "wps.duplicate.class.name", classes.size(), classes);
            // you can call log() multiple times to flag multiple
            // errors in the same file
        }
    }
}
Ok, so the ClassScanner opens up the classpath of the current JVM and searches it with various criteria. This particular one is a class name. It can go into the source folders, and most importantly it can go into the libraries contained in the classpath and search the *.class files within the jars using ASM. If it finds copies based on the criteria objects that are presented, it returns an array of the files. This still needs some massaging before mainstream use, but I'm on a time budget here, so quick and dirty it goes.
My problem is understanding the input parameters for the check itself. I copied from the example, but it looks like Checkstyle is giving me a basic IO File object for the source file itself, and the contents of the source file as a list of strings.
Do I have to run this list through another processor before I can get the fully qualified class name?
This is more difficult to do right than one might think, mostly because Java supports all kinds of nesting, like static classes defined within an interface, anonymous inner classes, and so on. Also, you are extending AbstractFileSetCheck, which is not a TreeWalker module, so you don't get an AST. If you want an AST, extend Check instead.
Since "quick and dirty" is an option for you, you could simply deduce the class name from the file name: Determine the canonical path, remove common directories from the beginning of the String, replace slashes with dots, cut off the file extension, and you are more or less there. (Without supporting inner classes etc. of course.)
A better solution might be to extend Check and register for PACKAGE_DEF, CLASS_DEF, ANNOTATION_DEF, ENUM_DEF, and INTERFACE_DEF. In your check, you maintain a stack of IDENTs found at these locations, which gives you all fully qualified class names in the .java file. (If you want anonymous classes, too, also register for LITERAL_NEW. I believe in your case you don't want those.)
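A sketch of that AST-based variant, using the same com.puppycrawl.tools.checkstyle.api package as the question (newer Checkstyle releases call the base class AbstractCheck; the duplicate bookkeeping shared across files is left out here):

import java.util.ArrayDeque;
import java.util.Deque;

import com.puppycrawl.tools.checkstyle.api.Check;
import com.puppycrawl.tools.checkstyle.api.DetailAST;
import com.puppycrawl.tools.checkstyle.api.FullIdent;
import com.puppycrawl.tools.checkstyle.api.TokenTypes;

public class FullyQualifiedNameCheck extends Check {

    private String packageName = "";
    private final Deque<String> outerTypes = new ArrayDeque<>();

    @Override
    public int[] getDefaultTokens() {
        return new int[] { TokenTypes.PACKAGE_DEF, TokenTypes.CLASS_DEF,
                TokenTypes.INTERFACE_DEF, TokenTypes.ENUM_DEF, TokenTypes.ANNOTATION_DEF };
    }

    @Override
    public void beginTree(DetailAST rootAST) {
        // reset per-file state
        packageName = "";
        outerTypes.clear();
    }

    @Override
    public void visitToken(DetailAST ast) {
        if (ast.getType() == TokenTypes.PACKAGE_DEF) {
            // the package name is the sibling just before the closing semicolon
            packageName = FullIdent.createFullIdent(
                    ast.getLastChild().getPreviousSibling()).getText();
        } else {
            outerTypes.addLast(ast.findFirstToken(TokenTypes.IDENT).getText());
            String fqcn = (packageName.isEmpty() ? "" : packageName + ".")
                    + String.join(".", outerTypes);
            // record fqcn in a structure shared across files to spot duplicates
            System.out.println(fqcn);
        }
    }

    @Override
    public void leaveToken(DetailAST ast) {
        if (ast.getType() != TokenTypes.PACKAGE_DEF) {
            outerTypes.removeLast();
        }
    }
}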
The latter solution would not work well in an IDE like Eclipse, because the Check lifecycle is too short, and you would keep losing the list of fully qualified class names. It will work in a continuous integration system or other form of external run, though. It is important that the static reference to the class list that you're maintaining is retained between check runs. If you need Eclipse support, you would have to add something to your Eclipse plugin that can keep the list (and also the list from previous full builds, persisted somewhere).

MEF: "Unable to load one or more of the requested types. Retrieve the LoaderExceptions for more information"

Scenario: I am using Managed Extensibility Framework to load plugins (exports) at runtime based on an interface contract defined in a separate dll. In my Visual Studio solution, I have 3 different projects: The host application, a class library (defining the interface - "IPlugin") and another class library implementing the interface (the export - "MyPlugin.dll").
The host looks for exports in its own root directory, so during testing, I build the whole solution and copy Plugin.dll from the Plugin class library bin/release folder to the host's debug directory so that the host's DirectoryCatalog will find it and be able to add it to the CompositionContainer. Plugin.dll is not automatically copied after each rebuild, so I do that manually each time I've made changes to the contract/implementation.
However, a couple of times I've run the host application without having copied (an updated) Plugin.dll first, and it has thrown an exception during composition:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions for more information
This is of course due to the fact that the Plugin.dll it's trying to import from implements a different version of IPlugin, where the property/method signatures don't match. Although it's easy to avoid this in a controlled and monitored environment, by simply avoiding (duh) obsolete IPlugin implementations in the plugin folder, I cannot rely on such assumptions in the production environment, where legacy plugins could be encountered.
The problem is that this exception effectively botches the whole Compose action and no exports are imported. I would have preferred that the mismatching IPlugin implementations are simply ignored, so that other exports in the catalog(s), implementing the correct version of IPlugin, are still imported.
Is there a way to accomplish this? I'm thinking either of several potential options:
There is a flag to set on the CompositionContainer ("ignore failing imports") prior to or when calling Compose
There is a similar flag to specify on the <ImportMany()> attribute
There is a way to "hook" on to the iteration process underlying Compose(), and be able to deal with each (failed) import individually
Using strong name signing to somehow only look for imports implementing the current version of IPlugin
Ideas?
I have also run into a similar problem.
If you are sure that you want to ignore such "bad" assemblies, then the solution is to call AssemblyCatalog.Parts.ToArray() right after creating each assembly catalog. This will trigger the ReflectionTypeLoadException which you mention. You then have a chance to catch the exception and ignore the bad assembly.
When you have created AssemblyCatalog objects for all the "good" assemblies, you can aggregate them in an AggregateCatalog and pass that to the CompositionContainer constructor.
This issue can be caused by several factors (any exception thrown while loading the assemblies). As the message says, look at the LoaderExceptions property to (hopefully) get some idea of what went wrong.
Another problem/solution that I found: when using DirectoryCatalog, if you don't specify the second parameter "searchPattern", MEF will load ALL the DLLs in that folder (including third-party ones) and start looking for export types, which can also cause this issue. A solution is to have a naming convention for all the assemblies that export types and to specify that in the DirectoryCatalog constructor. I use *_Plugin.dll, so that MEF only loads assemblies that contain exported types.
In my case MEF was loading an NHibernate DLL and throwing an assembly version error in the LoaderExceptions (this error can happen with any of the DLLs in the directory); this approach solved the problem.
Here is an example of above mentioned methods:
var di = new DirectoryInfo(Server.MapPath("../../bin/"));
if (!di.Exists) throw new Exception("Folder not exists: " + di.FullName);
var dlls = di.GetFileSystemInfos("*.dll");
AggregateCatalog agc = new AggregateCatalog();
foreach (var fi in dlls)
{
    try
    {
        var ac = new AssemblyCatalog(Assembly.LoadFile(fi.FullName));
        var parts = ac.Parts.ToArray(); // throws ReflectionTypeLoadException
        agc.Catalogs.Add(ac);
    }
    catch (ReflectionTypeLoadException ex)
    {
        Elmah.ErrorSignal.FromCurrentContext().Raise(ex);
    }
}
CompositionContainer cc = new CompositionContainer(agc);
_providers = cc.GetExports<IDataExchangeProvider>();

Associating an Object with other Objects and Properties of those Objects

I am looking for some help with designing some functionality in my application. I already have something similar designed but this problem is a little different.
Background:
In my application we have different Modules. Data in each module can be associated to other modules. Each Module is represented by an Object in our application.
Module 1 can be associated with Module 2 and Module 3. Currently I use a factory to provide the proper DAO for getting and saving this data.
It looks something like this:
class Module1Factory {
    public static Module1BridgeDAO createModule1BridgeDAO(int moduleId) {
        switch (moduleId)
        {
            case Module.Module2Id: return new Module1_Module2DAO();
            case Module.Module3Id: return new Module1_Module3DAO();
            default: return null;
        }
    }
}
Module1_Module2 and Module1_Module3 implement the same BridgeModule interface. In the database I have a Table for every module (Module1, Module2, Module3). I also have a bridge table for each module (they are many to many) Module1_Module2, Module1_Module3 etc.
The DAO basically handles all code needed to manage the association and retrieve its own instance data for the calling module. Now when we add new modules that associate with Module1 we simply implement the ModuleBridge interface and provide the common functionality.
New Development
We are adding a new module that will have the ability to be associated with other Modules as well as specific properties of that module. The module is basically providing the user the ability to add their custom forms to our other modules. That way they can collect additional information along with what we provide.
I want to start associating my Form module with other modules and their properties. I.e. if Module1 has a property Category, I want to associate Form instance data with that property.
There are many Forms. If a user creates an instance of Module2, they may always want to also have certain form(s) attached to that Module2 instance. If they create an instance of Module2 and select Category 1, then I may want additional Form(s) created.
I prototyped something like this:
Form
FormLayout (contains the labels and gui controls)
FormModule (associates a form with all instances of a module)
Form Instance (create an instance of a form to be filled out)
As I thought about it I was thinking about making a new FormModule table/class/dao for each Module and Property that I add. So I might have:
FormModule1
FormModule1Property1
FormModule1Property2
FormModule1Property3
FormModule1Property4
FormModule2
FormModule3
FormModule3Property1
Then as I did previously, I would use a factory to get the proper DAO for dealing with all of these. I would hand it an array of ids representing different modules and properties, and it would return all of the DAOs that I need in order to call getForms(), which in turn would return all of the forms for that particular bridge.
Some points
This will be for a new module, so I don't need to expand on the factory code I provided. I just wanted to show an example of what I have done in the past.
The new module can be associated with: other Modules (i.e. globally, for any instance of that module's data), and other module properties (i.e. only if the Module instance has a certain value in one of its properties).
I want to make it easy for developers to add associations with other modules and properties.
Can anyone suggest any design patterns or strategies for achieving this?
If anything is unclear please let me know.
Thank you,
Al
You can use Spring's Dependency Injection feature. This would give you the flexibility of instantiating the objects from an XML configuration file.
So, my suggestion would be to go with Spring.
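A minimal sketch of that suggestion (the bean id, DAO class, and beans.xml content are made up for illustration; it assumes the Spring Framework on the classpath and reuses the BridgeModule idea from the question):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class FormDaoLookup {

    // Stand-in for the bridge interface described in the question.
    interface BridgeModule {
        java.util.List<?> getForms();
    }

    public static void main(String[] args) {
        // beans.xml would map ids to concrete DAO classes, e.g.
        // <bean id="formModule2Dao" class="com.example.FormModule2DAO"/>
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
        BridgeModule dao = (BridgeModule) ctx.getBean("formModule2Dao");
        System.out.println(dao.getForms());
    }
}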

Is there a language with object-based access levels?

A common misconception about access level in Java, C#, C++ and PHP is that it applies to objects rather than classes. That is, that (say) an object of class X can't see another X's private members. In fact, of course, access level is class-based and one X object can effortlessly refer to the private members of another.
Does there exist a language with object-based access levels? Are they instead of, or in addition to, class-based access? What impact does this feature have on program design?
Ruby has object-based access levels. Here's a citation from Programming Ruby:
The difference between "protected" and "private" is fairly subtle, and is different in Ruby than in most common OO languages. If a method is protected, it may be called by any instance of the defining class or its subclasses. If a method is private, it may be called only within the context of the calling object---it is never possible to access another object's private methods directly, even if the object is of the same class as the caller.
And here's the source: http://whytheluckystiff.net/ruby/pickaxe/html/tut_classes.html#S4
Example difference between Java and Ruby
Java
public class Main {
    public static void main(String[] args) {
        Main.A a1 = new A();
        Main.A a2 = new A();
        System.out.println(a1.foo(a2));
    }

    static class A
    {
        public String foo(A other_a)
        {
            return other_a.bar();
        }

        private String bar()
        {
            return "bar is private";
        }
    }
}
// Outputs
// "bar is private"
Ruby
class A
  def foo other_a
    other_a.bar
  end

  private

  def bar
    "bar is private"
  end
end
a1 = A.new
a2 = A.new
puts a1.foo(a2)
# outputs something like
# in `foo': private method `bar' called for #<A:0x2ce9f44> (NoMethodError)
The main reason why no language has support for this at the semantic level is that the various needs are too different to find a common denominator that is big enough for such a feature. Data hiding is bad enough as it is, and it gets only worse when you need even more fine grained control.
There would be advantages to such a language. For example, you could mark certain data as private for anyone but the object which created it (passwords would be a great example: not even code running in the same application could read them).
Unfortunately, this "protection" would be superficial since at the assembler level, the protection wouldn't exist. In order to be efficient, the hardware would need to support it. In this case, probably at the level of a single byte in RAM. That would make such an application extremely secure and painfully slow.
In the real world, you'll find this in the TPM chip on your mainboard and, in a very coarse form, with the MMU tables of the CPU. But that's at a 4K page level, not at a byte level. There are libraries to handle both but that doesn't count as "language support" IMO.
Java has something like this in the form of the Security API. You must wrap the code in question in a guardian which asks the current SecurityManager whether access is allowed or not.
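A sketch of that guardian idiom (the permission name and the Secret class are made up for illustration; note that SecurityManager itself is deprecated for removal in recent Java releases, as the first question on this page discusses):

public final class Secret {

    private final char[] value;

    Secret(char[] value) {
        this.value = value.clone();
    }

    public char[] reveal() {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // a real application would typically define its own Permission subclass
            sm.checkPermission(new RuntimePermission("secret.reveal"));
        }
        return value.clone();
    }
}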
In Python, you can achieve something similar with decorators (for methods and functions) or by implementing __setattr__ and __getattr__ for field access.
You could implement this in C# by having some method capable of walking the stack and checking which object the caller is, and throwing an exception if it's not the current class. I don't know why you would want to, but I thought I'd throw it out there.