Rust JSON serialization overlapping responsibilities

I'm learning JSON serialization in Rust - in particular, how to serialize Rust objects to JSON.
Currently I see three methods of converting an instance of a struct to JSON:
1. Deriving the Encodable trait
2. Manually implementing the ToJson trait
3. Manually implementing the Encodable trait
The code below illustrates all three approaches:
extern crate serialize;

use serialize::{Encoder, Encodable, json};
use serialize::json::{Json, ToJson};
use std::collections::TreeMap;

fn main() {
    let document = Document::new();
    let word_document = WordDocument::new();

    println!("1. Deriving `Encodable`: {}", json::encode(&document));
    println!("2. Manually implementing `ToJson` trait: {}", document.to_json());
    println!("3. Manually implementing `Encodable` trait: {}", json::encode(&word_document));
}
#[deriving(Encodable)]
struct Document<'a> {
    metadata: Vec<(&'a str, &'a str)>,
}

impl<'a> Document<'a> {
    fn new() -> Document<'a> {
        let metadata = vec!(("Title", "Untitled Document 1"));
        Document { metadata: metadata }
    }
}

impl<'a> ToJson for Document<'a> {
    fn to_json(&self) -> Json {
        let mut tm = TreeMap::new();
        for &(ref mk, ref mv) in self.metadata.iter() {
            tm.insert(mk.to_string(), mv.to_string().to_json());
        }
        json::Object(tm)
    }
}
struct WordDocument<'a> {
    metadata: Vec<(&'a str, &'a str)>,
}

impl<'a> WordDocument<'a> {
    fn new() -> WordDocument<'a> {
        let metadata = vec!(("Title", "Untitled Word Document 1"));
        WordDocument { metadata: metadata }
    }
}

impl<'a, E, S: Encoder<E>> Encodable<S, E> for WordDocument<'a> {
    fn encode(&self, s: &mut S) -> Result<(), E> {
        s.emit_map(self.metadata.len(), |e| {
            let mut i = 0;
            for &(ref key, ref val) in self.metadata.iter() {
                try!(e.emit_map_elt_key(i, |e| key.encode(e)));
                try!(e.emit_map_elt_val(i, |e| val.encode(e)));
                i += 1;
            }
            Ok(())
        })
    }
}
Rust playpen: http://is.gd/r7cYmE
Results:
1. Deriving `Encodable`: {"metadata":[["Title","Untitled Document 1"]]}
2. Manually implementing `ToJson` trait: {"Title":"Untitled Document 1"}
3. Manually implementing `Encodable` trait: {"Title":"Untitled Word Document 1"}
The first method is fully automatic but does not provide sufficient flexibility.
The second and third achieve the same level of flexibility by specifying the serialization process manually. In my case I want the document metadata to be serialized as an object, not as an array (which is what the deriving implementation gives me).
Questions:
Why do methods 2 and 3 exist at all? I don't understand the reason for the overlap between them. I would expect there to exist only one automatic (deriving) method of serialization and one manual.
If I want manual serialization, which method should I choose and why?
Am I right in assuming that method 2 will build a Json enum in memory (besides the struct itself) and is a worse fit for huge documents (multi megabytes), while method 3 is streaming and safer for huge documents?
Why does the Rust standard library use method 3 even for primitives, while not using method 2 internally?

Why do methods 2 and 3 exist at all? I don't understand the reason for the overlap between them. I would expect there to exist only one automatic (deriving) method of serialization and one manual.
Method 2 (the ToJson trait) is specific to encoding JSON. It returns JSON objects, instead of writing to a stream. One example of use is for mapping to custom representations - see this example in the documentation.
Method 3 (implementing Encodable) has to exist for method 1 to work: the #[deriving(Encodable)] attribute simply generates such an implementation for you.
Am I right in assuming that method 2 will build a Json enum in memory (besides the struct itself) and is a worse fit for huge documents (multi megabytes), while method 3 is streaming and safer for huge documents?
Yes. ToJson creates a nested Json enum of the whole object, while Encodable streams to a Writer.
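For illustration, here is a minimal sketch of the streaming side in the style of the pre-1.0 serialize API used above. Treat the io names and the exact Encoder::new signature as assumptions - they shifted between pre-1.0 releases:

extern crate serialize;

use serialize::{Encodable, json};
use std::io::{MemWriter, Writer};

fn main() {
    let word_document = WordDocument::new();
    let mut writer = MemWriter::new();
    {
        // Assumed API of the era: json::Encoder wraps an io::Writer and the
        // Encodable impl emits straight into it, so no intermediate Json
        // value is ever built in memory.
        let mut encoder = json::Encoder::new(&mut writer as &mut Writer);
        word_document.encode(&mut encoder).unwrap();
    }
    // `writer` now holds the serialized bytes.
}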
If I want manual serialization, which method should I choose and why?
Why does the Rust standard library use method 3 even for primitives, while not using method 2 internally?
You should use Encodable. It is not specific to the JSON format, does not use an intermediate representation (so can be streamed instead of stored in memory) and should continue to work with new formats added to the library.

How to make Array and Dictionary types conform recursively to custom JSONType protocol

Why doesn't this Swift code compile?
protocol P { }
struct S: P { }

let arr: [P] = [S()]

extension Array where Element : P {
    func test<T>() -> [T] {
        return []
    }
}

let result: [S] = arr.test()
The compiler says: "Type P does not conform to protocol P" (or, in later versions of Swift, "Using 'P' as a concrete type conforming to protocol 'P' is not supported.").
Why not? This feels like a hole in the language, somehow. I realize that the problem stems from declaring the array arr as an array of a protocol type, but is that an unreasonable thing to do? I thought protocols were there exactly to help supply structs with something like a type hierarchy?
Why don't protocols conform to themselves?
Allowing protocols to conform to themselves in the general case is unsound. The problem lies with static protocol requirements.
These include:
static methods and properties
Initialisers
Associated types (although these currently prevent the use of a protocol as an actual type)
We can access these requirements on a generic placeholder T where T : P – however we cannot access them on the protocol type itself, as there's no concrete conforming type to forward onto. Therefore we cannot allow T to be P.
Consider what would happen in the following example if we allowed the Array extension to be applicable to [P]:
protocol P {
    init()
}

struct S : P {}
struct S1 : P {}

extension Array where Element : P {
    mutating func appendNew() {
        // If Element is P, we cannot possibly construct a new instance of it, as you cannot
        // construct an instance of a protocol.
        append(Element())
    }
}

var arr: [P] = [S(), S1()]

// error: Using 'P' as a concrete type conforming to protocol 'P' is not supported
arr.appendNew()
We cannot possibly call appendNew() on a [P], because P (the Element) is not a concrete type and therefore cannot be instantiated. It must be called on an array with concrete-typed elements, where that type conforms to P.
It's a similar story with static method and property requirements:
protocol P {
    static func foo()
    static var bar: Int { get }
}

struct SomeGeneric<T : P> {
    func baz() {
        // If T is P, what's the value of bar? There isn't one – because there's no
        // implementation of bar's getter defined on P itself.
        print(T.bar)

        T.foo() // If T is P, what method are we calling here?
    }
}

// error: Using 'P' as a concrete type conforming to protocol 'P' is not supported
SomeGeneric<P>().baz()
We cannot talk in terms of SomeGeneric<P>. We need concrete implementations of the static protocol requirements (notice how there are no implementations of foo() or bar defined in the above example). Although we can define implementations of these requirements in a P extension, these are defined only for the concrete types that conform to P – you still cannot call them on P itself.
Because of this, Swift just completely disallows us from using a protocol as a type that conforms to itself – because when that protocol has static requirements, it doesn't.
Instance protocol requirements aren't problematic, as you must call them on an actual instance that conforms to the protocol (and therefore must have implemented the requirements). So when calling a requirement on an instance typed as P, we can just forward that call onto the underlying concrete type's implementation of that requirement.
However, making special exceptions to the rule in this case could lead to surprising inconsistencies in how protocols are treated by generic code. That being said, the situation isn't too dissimilar to associatedtype requirements – which (currently) prevent you from using a protocol as a type. Having a restriction that prevents you from using a protocol as a type that conforms to itself when it has static requirements could be an option for a future version of the language.
Edit: And as explored below, this does look like what the Swift team are aiming for.
@objc protocols
And in fact, that's exactly how the language treats @objc protocols. When they don't have static requirements, they conform to themselves.
The following compiles just fine:
import Foundation

@objc protocol P {
    func foo()
}

class C : P {
    func foo() {
        print("C's foo called!")
    }
}

func baz<T : P>(_ t: T) {
    t.foo()
}

let c: P = C()
baz(c)
baz requires that T conforms to P; but we can substitute in P for T because P doesn't have static requirements. If we add a static requirement to P, the example no longer compiles:
import Foundation

@objc protocol P {
    static func bar()
    func foo()
}

class C : P {
    static func bar() {
        print("C's bar called")
    }

    func foo() {
        print("C's foo called!")
    }
}

func baz<T : P>(_ t: T) {
    t.foo()
}

let c: P = C()
baz(c) // error: Cannot invoke 'baz' with an argument list of type '(P)'
So one workaround to this problem is to make your protocol @objc. Granted, this isn't an ideal workaround in many cases, as it forces your conforming types to be classes, as well as requiring the Obj-C runtime, therefore not making it viable on non-Apple platforms such as Linux.
But I suspect that this limitation is (one of) the primary reasons why the language already implements 'protocol without static requirements conforms to itself' for @objc protocols. Generic code written around them can be significantly simplified by the compiler.
Why? Because @objc protocol-typed values are effectively just class references whose requirements are dispatched using objc_msgSend. On the flip side, non-@objc protocol-typed values are more complicated, as they carry around both value and witness tables in order to both manage the memory of their (potentially indirectly stored) wrapped value and to determine what implementations to call for the different requirements, respectively.
Because of this simplified representation for @objc protocols, a value of such a protocol type P can share the same memory representation as a 'generic value' of type some generic placeholder T : P, presumably making it easy for the Swift team to allow the self-conformance. The same isn't true for non-@objc protocols however, as such generic values don't currently carry value or protocol witness tables.
However this feature is intentional and is hopefully to be rolled out to non-@objc protocols, as confirmed by Swift team member Slava Pestov in the comments of SR-55 in response to your query about it (prompted by this question):
Matt Neuburg added a comment - 7 Sep 2017 1:33 PM
This does compile:
@objc protocol P {}
class C: P {}

func process<T: P>(item: T) -> T { return item }
func f(image: P) { let processed: P = process(item: image) }
Adding @objc makes it compile; removing it makes it not compile again.
Some of us over on Stack Overflow find this surprising and would like to know whether that's deliberate or a buggy edge-case.
Slava Pestov added a comment - 7 Sep 2017 1:53 PM
It's deliberate – lifting this restriction is what this bug is about.
Like I said it's tricky and we don't have any concrete plans yet.
So hopefully it's something the language will one day support for non-@objc protocols as well.
But what current solutions are there for non-@objc protocols?
Implementing extensions with protocol constraints
In Swift 3.1, if you want an extension with a constraint that a given generic placeholder or associated type must be a given protocol type (not just a concrete type that conforms to that protocol) – you can simply define this with an == constraint.
For example, we could write your array extension as:
extension Array where Element == P {
    func test<T>() -> [T] {
        return []
    }
}

let arr: [P] = [S()]
let result: [S] = arr.test()
Of course, this now prevents us from calling it on an array with concrete type elements that conform to P. We could solve this by just defining an additional extension for when Element : P, and just forward onto the == P extension:
extension Array where Element : P {
    func test<T>() -> [T] {
        return (self as [P]).test()
    }
}

let arr = [S()]
let result: [S] = arr.test()
However it's worth noting that this will perform an O(n) conversion of the array to a [P], as each element will have to be boxed in an existential container. If performance is an issue, you can simply solve this by re-implementing the extension method. This isn't an entirely satisfactory solution – hopefully a future version of the language will include a way to express a 'protocol type or conforms to protocol type' constraint.
Prior to Swift 3.1, the most general way of achieving this, as Rob shows in his answer, is to simply build a wrapper type for a [P], which you can then define your extension method(s) on.
Passing a protocol-typed instance to a constrained generic placeholder
Consider the following (contrived, but not uncommon) situation:
protocol P {
    var bar: Int { get set }
    func foo(str: String)
}

struct S : P {
    var bar: Int
    func foo(str: String) {/* ... */}
}

func takesConcreteP<T : P>(_ t: T) {/* ... */}

let p: P = S(bar: 5)

// error: Cannot invoke 'takesConcreteP' with an argument list of type '(P)'
takesConcreteP(p)
We cannot pass p to takesConcreteP(_:), as we cannot currently substitute P for a generic placeholder T : P. Let's take a look at a couple of ways in which we can solve this problem.
1. Opening existentials
Rather than attempting to substitute P for T : P, what if we could dig into the underlying concrete type that the P typed value was wrapping and substitute that instead? Unfortunately, this requires a language feature called opening existentials, which currently isn't directly available to users.
However, Swift does implicitly open existentials (protocol-typed values) when accessing members on them (i.e. it digs out the runtime type and makes it accessible in the form of a generic placeholder). We can exploit this fact in a protocol extension on P:
extension P {
    func callTakesConcreteP/*<Self : P>*/(/*self: Self*/) {
        takesConcreteP(self)
    }
}
Note the implicit generic Self placeholder that the extension method takes, which is used to type the implicit self parameter – this happens behind the scenes with all protocol extension members. When calling such a method on a protocol typed value P, Swift digs out the underlying concrete type, and uses this to satisfy the Self generic placeholder. This is why we're able to call takesConcreteP(_:) with self – we're satisfying T with Self.
This means that we can now say:
p.callTakesConcreteP()
And takesConcreteP(_:) gets called with its generic placeholder T being satisfied by the underlying concrete type (in this case S). Note that this isn't "protocols conforming to themselves", as we're substituting a concrete type rather than P – try adding a static requirement to the protocol and seeing what happens when you call it from within takesConcreteP(_:).
If Swift continues to disallow protocols from conforming to themselves, the next best alternative would be implicitly opening existentials when attempting to pass them as arguments to parameters of generic type – effectively doing exactly what our protocol extension trampoline did, just without the boilerplate.
However note that opening existentials isn't a general solution to the problem of protocols not conforming to themselves. It doesn't deal with heterogeneous collections of protocol-typed values, which may all have different underlying concrete types. For example, consider:
struct Q : P {
    var bar: Int
    func foo(str: String) {}
}

// The placeholder `T` must be satisfied by a single type...
func takesConcreteArrayOfP<T : P>(_ t: [T]) {}

// ...but an array of `P` could have elements of different underlying concrete types.
let array: [P] = [S(bar: 1), Q(bar: 2)]

// So there's no sensible concrete type we can substitute for `T`.
takesConcreteArrayOfP(array)
For the same reasons, a function with multiple T parameters would also be problematic, as the parameters must take arguments of the same type – however if we have two P values, there's no way we can guarantee at compile time that they both have the same underlying concrete type.
In order to solve this problem, we can use a type eraser.
2. Build a type eraser
As Rob says, a type eraser is the most general solution to the problem of protocols not conforming to themselves. They allow us to wrap a protocol-typed instance in a concrete type that conforms to that protocol, by forwarding the instance requirements to the underlying instance.
So, let's build a type erasing box that forwards P's instance requirements onto an underlying arbitrary instance that conforms to P:
struct AnyP : P {
    private var base: P

    init(_ base: P) {
        self.base = base
    }

    var bar: Int {
        get { return base.bar }
        set { base.bar = newValue }
    }

    func foo(str: String) { base.foo(str: str) }
}
Now we can just talk in terms of AnyP instead of P:
let p = AnyP(S(bar: 5))
takesConcreteP(p)

// example from #1...
let array = [AnyP(S(bar: 1)), AnyP(Q(bar: 2))]
takesConcreteArrayOfP(array)
Now, consider for a moment just why we had to build that box. As we discussed earlier, Swift needs a concrete type for cases where the protocol has static requirements. Consider if P had a static requirement – we would have needed to implement that in AnyP. But what should it have been implemented as? We're dealing with arbitrary instances that conform to P here – we don't know how their underlying concrete types implement the static requirements, therefore we cannot meaningfully express this in AnyP.
Therefore, the solution in this case is only really useful in the case of instance protocol requirements. In the general case, we still cannot treat P as a concrete type that conforms to P.
EDIT: Eighteen more months of working w/ Swift, another major release (that provides a new diagnostic), and a comment from @AyBayBay make me want to rewrite this answer. The new diagnostic is:
"Using 'P' as a concrete type conforming to protocol 'P' is not supported."
That actually makes this whole thing a lot clearer. This extension:
extension Array where Element : P {
doesn't apply when Element == P since P is not considered a concrete conformance of P. (The "put it in a box" solution below is still the most general solution.)
Old Answer:
It's yet another case of metatypes. Swift really wants you to get to a concrete type for most non-trivial things. [P] isn't a concrete type (you can't allocate a block of memory of known size for P). (I don't think that's actually true; you can absolutely create something of size P because it's done via indirection.) I don't think there's any evidence that this is a case of "shouldn't" work. This looks very much like one of their "doesn't work yet" cases. (Unfortunately it's almost impossible to get Apple to confirm the difference between those cases.) The fact that Array<P> can be a variable type (where Array cannot) indicates that they've already done some work in this direction, but Swift metatypes have lots of sharp edges and unimplemented cases. I don't think you're going to get a better "why" answer than that. "Because the compiler doesn't allow it." (Unsatisfying, I know. My whole Swift life…)
The solution is almost always to put things in a box. We build a type-eraser.
protocol P { }
struct S: P { }

struct AnyPArray {
    var array: [P]
    init(_ array: [P]) { self.array = array }
}

extension AnyPArray {
    func test<T>() -> [T] {
        return []
    }
}

let arr = AnyPArray([S()])
let result: [S] = arr.test()
When Swift allows you to do this directly (which I do expect eventually), it will likely just be by creating this box for you automatically. Recursive enums had exactly this history. You had to box them and it was incredibly annoying and restricting, and then finally the compiler added indirect to do the same thing more automatically.
If you extend the CollectionType protocol instead of Array, and constrain the element type to the protocol as a concrete type (with ==), you can rewrite the previous code as follows.
protocol P { }
struct S: P { }

let arr: [P] = [S()]

extension CollectionType where Generator.Element == P {
    func test<T>() -> [T] {
        return []
    }
}

let result: [S] = arr.test()

How to write a function that takes a slice of functions?

I am trying to write a function that takes a slice of functions. Consider the following simple illustration:
fn g<P: Fn(&str) -> usize>(ps: &[P]) {}

fn f1() -> impl Fn(&str) -> usize { |s: &str| s.len() }
fn f2() -> impl Fn(&str) -> usize { |s: &str| s.len() }

fn main() {
    g(&[f1(), f2()][..]);
}
It fails to compile:
error[E0308]: mismatched types
 --> src/main.rs:6:15
  |
6 |     g(&[f1(), f2()][..]);
  |               ^^^^ expected opaque type, found a different opaque type
  |
  = note: expected type `impl for<'r> std::ops::Fn<(&'r str,)>` (opaque type)
             found type `impl for<'r> std::ops::Fn<(&'r str,)>` (opaque type)
Is there any way to do this?
Your problem is that every element of the array must be of the same type, but the return type of a function declared as returning impl Trait is an opaque type - that is, an unspecified, unnamed type that you can only use by means of the given trait.
You have two functions whose return types are written as the same impl Trait, but that does not mean that they return the same type. In fact, as your compiler output shows, they are different opaque types, so they cannot be part of the same array. If you were to write an array of values of the same type, such as:
g(&[f1(), f1(), f1()]);
then it would work. But with different functions, there will be different types and the array is impossible to build.
Does that mean there is no solution for your problem? Of course not! You just have to invoke dynamic dispatch. That is, you have to make your slice of type &[&dyn Fn(&str) -> usize]. For that you need to do two things:
Add a level of indirection: dynamic dispatching is always done via references or pointers (&dyn Trait or Box<dyn Trait> instead of Trait).
Do an explicit cast to &dyn Trait to avoid ambiguities in the conversion.
There are many ways to do the cast: you can cast the first element of the array, you can declare temporary variables, or you can give the slice a type. I prefer the last, because it is more symmetric. Something like this:
fn main() {
    let fns: &[&dyn Fn(&str) -> usize] = &[&f1(), &f2()];
    g(fns);
}
Link to a playground with this solution.
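If the closures need to be owned rather than borrowed from temporaries, the same idea works with Box<dyn Trait>; here is a minimal sketch reusing f1, f2 and g from above (Box<dyn Fn(&str) -> usize> itself implements Fn(&str) -> usize, so it satisfies g's bound):

fn main() {
    // Box each closure so the Vec owns them; the slice element type is then
    // a single concrete type: Box<dyn Fn(&str) -> usize>.
    let fns: Vec<Box<dyn Fn(&str) -> usize>> = vec![Box::new(f1()), Box::new(f2())];
    g(&fns);
}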

Can you share `cfg!` up the dependency tree using Cargo?

I have a Rust library that has some platform abstractions in the form of "backends". The library uses a build.rs to do some platform checks and sets some compile time configuration variables according to what backends can be built. Then in the code, backend code is guarded like this:
#[cfg(backend1)]
struct Backend1 { ... }
#[cfg(backend2)]
struct Backend2 { ... }
...
A consumer of this library wants to instantiate a backend suitable for the current platform. Ideally you'd do something like:
fn get_backend() -> Box<Backend> {
    #[cfg(backend1)]
    return mylib::backends::Backend1::new(...);

    #[cfg(backend2)]
    return mylib::backends::Backend2::new(...);

    ...
}
However, the configuration variables inside mylib do not get shared to consumers, so #[cfg(backend1)] won't work as expected.
Is there a way to achieve the desired behaviour without requiring manual intervention for the person building the consumer of the library? I don't want users to have to manually pass a list of available backends. It seems this should be automatable.
Note that the structs for backends not built into mylib are totally absent, meaning that consumers can't reference them. Consumers would need to use conditional compilation to ensure that only backends built into mylib are referenced.
There may be more than one backend for any given platform, and in that case, the consumer should be able to choose which one.
You do not have access to a library's config from outside.
You will never be able to know the concrete types of the backends from consumer code, so you have to come up with some machinery to be able to construct them, taking into account the different needs of each of their constructors.
The basic idea here is to introduce a context - something like the dependency injection context you might use in an object-oriented language. The context holds values that might be needed by a constructor.
To create trait objects, you need a trait:
pub trait Backend {
    // all the common stuff for backends
}
Next, a trait for constructing backends, and a struct to hold all the possible configuration variables needed by them. This can't be the same trait as Backend, because the new method prevents it from being made into an object. Most of the variables are optional, since not all backends need them:
pub trait BackendConstruct {
    fn new(ctx: &BackendContext) -> Result<Box<Backend>, BackendError>;
}

pub struct BackendContext<'a> {
    pub var_1: Option<&'a str>,
    pub var_2: Option<&'a str>,
    pub another: Option<bool>,
    // etc.
}
If you provide the wrong variables, then you need to get an error back. Making the construction dynamic unfortunately means that errors surface at runtime instead of at compile time:
pub struct BackendError(String);
The availability of each backend depends on platform support, so make their definitions dependent on the platform:
#[cfg(platform1)]
mod backend1 {
    pub struct Backend1;

    impl ::Backend for Backend1 {}

    impl ::BackendConstruct for Backend1 {
        fn new(ctx: &::BackendContext) -> Result<Box<::Backend>, ::BackendError> {
            if ctx.var_1.is_none() {
                Err(::BackendError("Backend1 requires var_1 to initialize".to_string()))
            } else {
                Ok(Box::new(Backend1 {}))
            }
        }
    }
}

#[cfg(any(platform1, platform2))]
mod backend2 {
    pub struct Backend2;

    impl ::Backend for Backend2 {}

    impl ::BackendConstruct for Backend2 {
        fn new(ctx: &::BackendContext) -> Result<Box<::Backend>, ::BackendError> {
            Ok(Box::new(Backend2 {}))
        }
    }
}
None of the concrete types are public, and any might just not exist. So provide an enum so that consumers can specify which backend they want:
pub enum BackendType {
    // these names are available in all configurations
    Default,
    Backend1,
    Backend2,
    Backend3,
}
And a function for constructing the backends. It will be an Err to request an unsupported backend or to omit required variables from the context. Consumers should be encouraged to use the Default variant, which should map to a valid backend on any platform:
pub fn create_backend(backend: BackendType, ctx: &BackendContext) -> Result<Box<Backend>, BackendError> {
    match backend {
        #[cfg(any(platform1, platform2))]
        BackendType::Default => Backend2::new(ctx),
        #[cfg(platform1)]
        BackendType::Backend1 => Backend1::new(ctx),
        #[cfg(any(platform1, platform2))]
        BackendType::Backend2 => Backend2::new(ctx),
        _ => Err(BackendError("Backend not available".to_string())),
    }
}
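From the consumer's side, usage might then look something like this sketch (it assumes the context fields are public, as written above, and that the items are reachable at these paths):

extern crate mylib;

use mylib::{create_backend, BackendContext, BackendType};

fn main() {
    // Fill in only the variables that the desired backend needs.
    let ctx = BackendContext {
        var_1: Some("some value"),
        var_2: None,
        another: None,
    };

    // Ask for the platform's default backend; an unsupported request or a
    // missing variable surfaces as a runtime BackendError.
    match create_backend(BackendType::Default, &ctx) {
        Ok(_backend) => { /* use it through the Backend trait */ }
        Err(_err) => { /* report the configuration problem */ }
    }
}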
You could have a function that returns a list of backends available.
Then, when an app wants to use your library, it can call that function from its build.rs, select one of the backends available, and pass it as an option to the compiler.
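A sketch of that approach (available_backends is hypothetical - the library would need to expose such a function, and the consumer would list mylib under [build-dependencies] as well):

// build.rs of the consuming crate
extern crate mylib;

fn main() {
    // Re-emit mylib's detected backends as cfg flags for this crate, so that
    // #[cfg(backend1)] etc. work in the consumer's own code.
    for backend in mylib::available_backends() {
        println!("cargo:rustc-cfg={}", backend);
    }
}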

Rust scoping rules for struct-owned functions

I am trying to understand what exactly the scope is for functions defined within an impl block but which don't accept &self as a parameter. For example, why doesn't the following chunk of code compile? I get the error "cannot find function generate_a_result in this scope".
pub struct Blob {
    num: u32,
}

impl Blob {
    pub fn new() -> Blob {
        generate_a_result()
    }

    fn generate_a_result() -> Blob {
        let result = Blob {
            num: 0,
        };
        result
    }
}
These functions are called associated functions, and they live in the namespace of the type: they always have to be called like Type::function(). In your case, that's Blob::generate_a_result(). But for referring to your own type, there is the special keyword Self. So the best solution is:
Self::generate_a_result()
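Applied to the code from the question, the fixed version compiles:

pub struct Blob {
    num: u32,
}

impl Blob {
    pub fn new() -> Blob {
        // Associated functions live in the type's namespace, so call them
        // through `Self` (or, equivalently, `Blob::generate_a_result()`).
        Self::generate_a_result()
    }

    fn generate_a_result() -> Blob {
        Blob { num: 0 }
    }
}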

the impl does not reference any types defined in this crate

In one of my structs, Struct1, there's a field of type time::Tm. Somewhere in the code an instance of the struct gets decoded from a JSON string to an instance of Struct1.
json::decode(&maybe_struct1)
At first I got an error saying that to_json isn't implemented for time::Tm. So I implemented it:
impl json::ToJson for time::Tm {
    fn to_json(&self) -> json::Json {
        let mut d = BTreeMap::new();
        d.insert("tm_sec".to_string(), self.tm_sec.to_json());
        d.insert("tm_min".to_string(), self.tm_min.to_json());
        //..............
And now it says
error: the impl does not reference any types defined in this crate; only traits defined in the current crate can be implemented for arbitrary types [E0117]
I gathered that this was a bug before version 1. But is it still? If not, how do I fix it?
It's not a bug - it's a feature. Really. To implement a trait, at least one of these statements must be true:
the type is declared in this crate
the trait is declared in this crate
otherwise you get E0117. You can find more info using rustc --explain E0117:
This error indicates a violation of one of Rust's orphan rules for trait implementations. The rule prohibits any implementation of a foreign trait (a trait defined in another crate) where:
the type that is implementing the trait is foreign
all of the parameters being passed to the trait (if there are any) are also foreign.
Here's one example of this error:
impl Drop for u32 {}
To avoid this kind of error, ensure that at least one local type is referenced by the impl:
pub struct Foo; // you define your type in your crate

impl Drop for Foo { // and you can implement the trait on it!
    fn drop(&mut self) {
        // code of trait implementation here
    }
}

impl From<Foo> for i32 { // or you use a type from your crate as
                         // a type parameter
    fn from(i: Foo) -> i32 { 0 }
}
Alternatively, define a trait locally and implement that instead:

trait Bar {
    fn get(&self) -> usize;
}

impl Bar for u32 {
    fn get(&self) -> usize { 0 }
}
For information on the design of the orphan rules, see RFC 1023.
EDIT:
To achieve what you want you have two solutions:
implement ToJson on your whole type:
impl json::ToJson for Struct1 { … }
create a wrapper type, struct TmWrapper(time::Tm), and implement two traits for it: From and ToJson.
EDIT 2:
Step by step explanation:
This is what you want to achieve: http://is.gd/6UF3jd
Solutions:
implement trait on whole type using types that you want: http://is.gd/CRfPeJ
create wrapper type and implement trait that you want on it: http://is.gd/7XV5w9
This is exactly what is described in the explanation of the error code above.
So you see - at least one of the trait or the struct must be declared within the current crate to allow the implementation of the trait on the given type. In your case both of them are external, and that is what Rust is preventing. If you want to achieve something like this, you need to use one of the workarounds described above.
It's not a bug. The language requires that when you implement a trait, you have defined either the trait or the type involved. If you implemented neither, it doesn't allow it.
If it didn't, it would be possible for someone else to come along and also implement json::ToJson for time::Tm, and suddenly the compiler has no idea which code to use.
The simplest way to work around this is to wrap your time::Tm in a newtype, like so:
struct TmWrap(time::Tm);
Now, because you defined this type, you can implement json::ToJson on it. This does mean you have to wrap/unwrap the Tm constantly, but the only other alternative is to implement ToJson for the entire containing Struct1 type, which is probably going to be even more work.
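Putting that together, here is a sketch of the newtype approach against the same pre-1.0 json API the question uses (the field list is abbreviated, as in the question):

struct TmWrap(time::Tm);

impl json::ToJson for TmWrap {
    fn to_json(&self) -> json::Json {
        let mut d = BTreeMap::new();
        // Forward each field of the wrapped Tm, reached via `self.0`.
        d.insert("tm_sec".to_string(), self.0.tm_sec.to_json());
        d.insert("tm_min".to_string(), self.0.tm_min.to_json());
        // ... remaining Tm fields ...
        json::Json::Object(d)
    }
}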