[swift-evolution] [Proposal] Foundation Swift Archival & Serialization

Matthew Johnson matthew at anandabits.com
Mon Mar 20 09:50:48 CDT 2017


> On Mar 20, 2017, at 5:34 AM, Brent Royal-Gordon <brent at architechies.com> wrote:
> 
>> On Mar 19, 2017, at 8:19 PM, Matthew Johnson <matthew at anandabits.com <mailto:matthew at anandabits.com>> wrote:
>> 
>> First, your solution does not allow a user to see a context if they can't name the type (you can't get it as Any and use reflection, etc).
> 
> What I meant is that, if you retrieve the context, you know it is of the type you expect. You don't need to *also* cast it.

Right.  What I’m saying is that if all we’re doing is moving the cast into the encoder / decoder, I don’t see value in doing that over the obvious thing of exposing the context as Any? and letting the caller cast it.  If the encoder / decoder uses the requested type in an algorithm to find the matching context, then we obviously do need to pass the type as a parameter.  :)
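For concreteness, here is a small sketch of the two shapes being contrasted (all names here are hypothetical stand-ins, not the proposal’s API).  With a single stored context, the typed lookup is just a cast moved inside the encoder; it only earns its keep once the encoder uses the requested type to search among several contexts:

```swift
struct MyContext { var apiVersion: Int }

// Option 1: expose the context as Any? and make the caller cast.
struct EncoderExposingAny {
    var context: Any?
}

// Option 2: the encoder takes the requested type and performs the
// cast (or, with multiple contexts, a search) internally.
struct EncoderWithTypedLookup {
    var contexts: [Any] = []
    func context<Context>(ofType type: Context.Type) -> Context? {
        // With several stored contexts, the requested type drives the
        // search; this is where the typed API adds value over a bare Any?.
        for candidate in contexts {
            if let match = candidate as? Context { return match }
        }
        return nil
    }
}

let e1 = EncoderExposingAny(context: MyContext(apiVersion: 1))
let viaAny = e1.context as? MyContext                 // caller casts

let e2 = EncoderWithTypedLookup(contexts: [MyContext(apiVersion: 2)])
let viaLookup = e2.context(ofType: MyContext.self)    // encoder searches
```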

> 
>> I don't see this restriction as being beneficial.  Second, your solution introduces several subtle problems mentioned in my last email which you didn't respond to (overlapping context types, etc).  
> 
> I mentioned that, if we give up storing values in a dictionary, we can come up with some sort of sensible-ish behavior for overlapping context types.

Oh, sorry.  I missed that the breadth-first algorithm for finding a matching context was the answer to this.

> 
>>>>> 	protocol Encoder {
>>>>> 		// Retrieve the context instance of the indicated type.
>>>>> 		func context<Context>(ofType type: Context.Type) -> Context?
>>>>> 		
>>>>> 		// This context is visible for `encode(_:)` calls from this encoder's containers all the way down, recursively.
>>>>> 		func addContext<Context>(_ context: Context, ofType type: Context.Type)
>>>> 
>>>> What happens if you call `addContext` more than once with values of the same type?
>>> 
>>> It overrides the previous context, but only for the containers created by this `encode(to:)` method and any containers nested within them.
>>> 
>>> (Although that could cause trouble for an encoder which only encodes objects with multiple instances once. Hmm.)
>>> 
>>>> And why do you require the type to be passed explicitly when it is already implied by the type of the value?
>>> 
>>> As you surmised later, I was thinking in terms of `type` being used as a dictionary key; in that case, if you stored a `Foo` into the context, you would not later be able to look it up using one of `Foo`'s supertypes. But if we really do expect multiple contexts to be rare, perhaps we don't need a dictionary at all—we can just keep an array, loop over it with `as?`, and return the first (or last?) match. If that's what we do, then we probably don't need to pass the type explicitly.
>> 
>> The array approach is better because at least there is an order to the contexts and we can assign precise semantics in the presence of overlapping context types by saying you get the first (most recent) context that can be cast to the type you ask for.  
>> 
>> That said, I think what you're really trying to model here is a context stack, isn't it?  Why don't we just do that?
> 
> You mention this a couple times, but I don't think it's really possible. Here's why.
> 
> Suppose you write these types:
> 
> 	struct SomeObjectContext {
> 		var oldFormat: Bool
> 	}
> 	
> 	struct Root: Codable {
> 		var str: SomeStruct
> 		var obj: SomeObject
> 		
> 		func encode(to encoder: Encoder) throws {
> 			encoder.push(SomeObjectContext(oldFormat: true))
> 			
> 			let container = encoder.container(keyedBy: CodingKeys.self)
> 			try container.encode(str, forKey: .str)
> 			try container.encode(obj, forKey: .obj)
> 		}
> 		...
> 	}
> 	
> 	struct SomeStruct: Codable {
> 		var obj: SomeObject
> 		
> 		func encode(to encoder: Encoder) throws {
> 			encoder.push(SomeObjectContext(oldFormat: false))
> 			
> 			let container = encoder.container(keyedBy: CodingKeys.self)
> 			try container.encode(obj, forKey: .obj)
> 		}
> 	}
> 	
> 	class SomeObject: Codable {
> 		
> 		func encode(to encoder: Encoder) throws {
> 			let context = encoder.context(ofType: SomeObjectContext.self)!
> 			
> 			print(context.oldFormat)
> 		}
> 	}
> 
> And you construct an object graph like this:
> 
> 	let object = SomeObject()
> 	
> 	let root = Root(
> 		str: SomeStruct(obj: object),
> 		obj: object
> 	)
> 
> And finally, you encode it with a coder which respects object identity, so that even if a given object appears in several different places in the object graph, it only encodes that object once.

Ahh, yes.  Objects are pesky things!  I wasn’t thinking about that because I only ever encode / decode values in a tree structure.

This brings to mind a related question about how objects will be handled.  What if I have an immutable object with value semantics?  I might actually *want* all instances of that object to be encoded independently.  The fact that the same value-semantic object is referenced in multiple places is incidental, it is not part of the data model.  Do encoders / decoders need to know if they are dealing with a value-semantic Codable object so they can do the right thing here?
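One way to picture the question (a hypothetical sketch, not anything from the proposal): an immutable reference type with value semantics can appear twice in a graph purely by accident of sharing, and an identity-preserving encoder would see only one object where the data model means two independent values.

```swift
// An immutable class: reference type, but value semantics in practice.
final class Money {
    let cents: Int
    init(cents: Int) { self.cents = cents }
}

let price = Money(cents: 100)
// The same instance referenced twice -- incidental sharing, not
// part of the data model.
let lineItems = [price, price]

// An identity-respecting encoder would deduplicate: it sees one object.
let uniqueByIdentity = Set(lineItems.map { ObjectIdentifier($0) }).count

// A value-semantic encoding would want both occurrences emitted
// independently, which is why the encoder might need to know which
// behavior the type intends.
```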

> 
> Which context does `SomeObject.encode(to:)` see? The one with `oldFormat` or the one without? Is there a "first to call wins" rule? Do we think that's sustainable?
> 
> There are probably a couple of complicated rules we could use to try to cancel this out. But by far the easiest is this: The set of contexts is cast in stone at the beginning of the encoding process and cannot be altered after the fact.

Yes, I agree there are many advantages to doing that.

I think I was thrown off by this statement in your prior post: "The problem with providing all the contexts at the top level is that then the top level has to *know* what all the contexts needed are."  This had me thinking you were proposing that all calls to encode / decode would be grabbing context from the Codable.  I didn’t look at the code closely enough and missed that this wasn’t what was happening.

> And the simplest way I can come up with to do that, while still allowing libraries and subsystems to encapsulate their dependencies, is to let a context specify other contexts it needs.

This requires the top level to “know” about all of the needed contexts in some sense.  But I can see how the indirection of accessing that through the single context it provides could be useful.

> 
>>> The problem with providing all the contexts at the top level is that then the top level has to *know* what all the contexts needed are. Again, if you're encoding a type from FooKit, and it uses a type from GeoKit, then you—the user of FooKit—need to know that FooKit uses GeoKit and how to make contexts for both of them. There's no way to encapsulate GeoKit's role in encoding.
>> 
>> The use cases I know of for contexts are really around helping a type choose an encoding strategy.  I can't imagine a real world use case where a Codable type would have a required context - it's easy enough to choose one strategy as the default.
> 
> Hmm...
> 
> Here's a toy version of a real problem: I have an ebook editing app which uses a package format. The text of each chapter is stored in a separate HTML file, while a "toc.plist" file specifies the order and (recursive) structure of the chapters.
> 
> This is currently all handled with Objective-C and some custom serialization code I've written, but it could be done in Swift with Codable and a context containing the chapters:
> 
> 	struct Chapter {
> 		var id: ChapterID
> 		var html: Data
> 	}
> 	
> 	struct TOCEntry {
> 		var chapter: Chapter
> 		var children: [TOCEntry]
> 	}
> 	
> 	extension TOCEntry: Codable {
> 		class Context {
> 			var chapters: [ChapterID: Chapter]
> 		}
> 		
> 		func encode(to encoder: Encoder) throws {
> 			let context = encoder.context(ofType: Context.self)!
> 			let container = encoder.container(keyedBy: CodingKeys.self)
> 			
> 			guard context.chapters[chapter.id] == nil else {
> 				throw BookError.crosslinkedChapter(chapterID: chapter.id)
> 			}
> 			chapters[chapter.id] = chapter

Is this supposed to be `context.chapters[chapter.id] = chapter`?

> 			
> 			try container.encode(chapter.id, forKey: .chapter)
> 			try container.encode(children, forKey: .children)
> 		}
> 		
> 		init(from decoder: Decoder) throws {
> 			let context = decoder.context(ofType: Context.self)!
> 			let container = decoder.container(keyedBy: CodingKeys.self)
> 			
> 			let chapterID = try container.decode(ChapterID.self, forKey: .chapter)
> 			guard let chapter = context.chapters[chapterID] else {
> 				throw BookError.missingChapter(chapterID: chapterID)
> 			}
> 			
> 			self.chapter = chapter
> 			self.children = try container.decode([TOCEntry].self, forKey: .children)
> 		}
> 	}
> 
> This is a type which simply would not encode or decode properly if it didn't have the right context. Of course, that breaks the "don't mutate the context" rule I've been suggesting so far, but what's a little hypocrisy between friends?

Lol.  Mutating the context is kind of ugly!  On the other hand this allows for the behavior of dynamic contexts to be recreated by users if that is the best solution to a specific problem.

> 
> * * *
> 
> But.
> 
> The question is not whether the type can encode itself without being provided with hints about the encoding strategy. The question is whether it can encode itself *correctly for the use case in question* without being provided with hints about the encoding strategy. Even if a type can somehow stuff itself into the coder without a context, that doesn't mean it will stuff itself into the coder in the format you need.

I think types that require a context to be encoded correctly are probably very rare.  There is usually a way to do something sensible as a default.  But if you’re enforcing dynamic constraints like the one you show above, sure there is a need for a context.

> 
>> That said, I can imagine really evil and degenerate API designs that would require the same type to be encoded differently in different parts of the tree.  I could imagine dynamic contexts being helpful in solving some of these cases, but often you would need to look at the codingKeyContext to get it right.
> 
> I could too—it makes me think of the infamous "PDF is not my favorite format" rant <https://github.com/zepouet/Xee-xCode-4.5/blob/master/XeePhotoshopLoader.m#L108 <https://github.com/zepouet/Xee-xCode-4.5/blob/master/XeePhotoshopLoader.m#L108>>. To be honest, I won't be terribly unhappy if our design discourages that sort of thing. :^)

I don’t think anyone would argue that this kind of thing isn’t horrible.  It is.  I don’t mind discouraging it at all.  I just don’t want to make it impossible to deal with.  Sometimes the world is messy and we have to deal with the horrible.

> 
>> If you have a concrete real world use case involving module boundaries please elaborate.  I'm having trouble imagining the details about a precise problem you would solve using dynamic contexts.  I get the impression you have something more concrete in mind than I can think of.
> 
> I don't really, but I can elaborate on the hypothetical example I've alluded to previously.
> 
> Suppose you're writing a framework to interact with a web service which is definitely not Yelp. It represents coordinates using a type from Core Location.
> 
> 	struct KelpBusiness {
> 		var coordinates: CLLocation
> 	}
> 
> The Kelp backend server is currently some Node.js monstrosity, but your developers are working on a 2.0 in Swift. As long as they're there, they're cleaning up a few things. One of them is that the old backend expected locations in this format:
> 
> 	"-30.000000,50.000000"
> 
> But the new one will use this instead:
> 
> 	{ "latitude": -30.0, "longitude": 50.0 }
> 
> Fortunately, CLLocation supports both of these properties—you just need to configure the CLCodingContext appropriately:
> 
> 	public struct CLCodingContext: CodingContext {
> 		public enum JSONLocationFormat {
> 			case commaSeparatedString
> 			case subobject
> 		}
> 		public var jsonLocationFormat: JSONLocationFormat
> 	}
> 
> But the change in this context has to happen in sync with the change in servers. So you write this:
> 
> 	public struct KelpCodingContext: CodingContext {
> 		public enum Version {
> 			case v1, v2
> 		}
> 		public var version: Version = .v1
> 		
> 		public var underlyingContexts: [CodingContext] {
> 			switch version {
> 			case .v1:
> 				return [CLCodingContext(jsonLocationFormat: .commaSeparatedString)]
> 			case .v2:
> 				return [CLCodingContext(jsonLocationFormat: .subobject)]
> 			}
> 		}
> 	}
> 
> And, et voilà, you're always using the right CLCodingContext for the server version.

That makes sense.  Good example.

> 
>>> On the other hand, there *could* be a way to encapsulate it. Suppose we had a context protocol:
>>> 
>>> 	protocol CodingContext {
>>> 		var underlyingContexts: [CodingContext] { get }
>>> 	}
>>> 	extension CodingContext {
>>> 		var underlyingContexts: [CodingContext] { return [] }
>>> 	}
>>> 
>>> Then you could have this as your API surface:
>>> 
>>> 	protocol Encoder {
>>> 		// Retrieve the context instance of the indicated type.
>>> 		func context<Context: CodingContext>(ofType type: Context.Type) -> Context?
>>> 	}
>>> 	// Likewise on Decoder
>>> 	
>>> 	// Encoder and decoder classes should accept contexts in their top-level API:
>>> 	open class JSONEncoder {
>>> 		open func encode<Value : Codable>(_ value: Value, with context: CodingContext? = nil) throws -> Data
>>> 	}
>>> 
>>> And libraries would be able to add additional contexts for dependencies as needed.
>>> 
>>> (Hmm. Could we maybe do this?
>>> 
>>> 	protocol Codable {
>>> 		associatedtype CodingContextType: CodingContext = Never
>>> 		
>>> 		func encode(to encoder: Encoder) throws
>>> 		init(from decoder: Decoder) throws
>>> 	}
>>> 
>>> 	protocol Encoder {
>>> 		// Retrieve the context instance of the indicated type.
>>> 		func context<CodableType: Codable>(for instance: Codable) -> CodableType.CodingContextType?
>>> 	}
>>> 	// Likewise on Decoder
>>> 	
>>> 	// Encoder and decoder classes should accept contexts in their top-level API:
>>> 	open class JSONEncoder {
>>> 		open func encode<Value : Codable>(_ value: Value, with context: Value.CodingContextType? = nil) throws -> Data
>>> 	}
>>> 
>>> That would make sure that, if you did use a context, it would be the right one for the root type. And I don't believe it would have any impact on types which didn't use contexts.)
>> 
>> I think this is far more than we need.  I think we could just say encoders and decoders keep a stack of contexts.  Calls to encode or decode (including top level) can provide a context (or an array of contexts which are interpreted as a stack bottom on left, top on right).  When the call returns the stack is popped to the point it was at before the call.  We could also include an explicit `func push(contexts: Context...)` method on encoder and decoder to allow a Codable to set context used by all of its members.  All calls to `push` would be popped when the current call to encode / decode returns.
>> 
>> Users ask for a context from an encoder / decoder using `func context<Context>(of: Context.Type) -> Context?`.  The stack is searched from the top to the bottom for a value that can be successfully cast to Context.
> 
> Again, I don't think a stack will work.
> 
> The CodingContextType thing was a bit of a flight of fancy; I sometimes like to push a design beyond what's actually practical. What do you think of the `underlyingContexts` part?

Now that I had a second look and understand it better I like it very much.

> 
>>> I also see it as an incentive for users to build a single context type rather than sprinkling in a whole bunch of separate keys. I really would prefer not to see people filling a `userInfo` dictionary with random primitive-typed values like `["json": true, "apiVersion": "1.4"]`; it seems too easy for names to clash or people to forget the type they're actually using. `context(…)` being a function instead of a subscript is similarly about ergonomics: it discourages you from trying to mutate your context during the encoding process (although it doesn't prevent it for reference types.)
>> 
>> I agree with this sentiment and indicated to Tony the desire to steer people away from treating this as a dictionary to put a lot of stuff in and towards defining an explicit context type.  This and the fact that keys will feel pretty arbitrary are behind my desire to avoid the keys and dictionary approach.
> 
> Yes, I'm not a fan of the arbitrary keys, either.
> 
>>>> Unfortunately, I don't think there is a good answer to the question about multiple context values with the same type though.  I can’t think of a good way to prevent this statically.  Worse, the values might not have the same type, but be equally good matches for a type a user requests (i.e. both conform to the same protocol).  I’m not sure how a user-defined encoder / decoder could be expected to find the “best” match using semantics that would make sense to Swift users (i.e. following the rules that are kind of the inverse to overload resolution).  
>>>> 
>>>> Even if this were possible there are ambiguous cases where there would be equally good matches.  Which value would a user get when requesting a context in that case?  We definitely don’t want accessing the context to be a trapping or throwing operation.  That leaves returning nil or picking a value at random.  Both are bad choices IMO.
>>> 
>>> If we use the `underlyingContexts` idea, we could say that the context list is populated breadth-first and the first context of a particular type encountered wins. That would tend to prefer the context "closest" to the top-level one provided by the caller, which will probably have the best fidelity to the caller's preferences.
>> 
>> I'm not totally sure I follow you here, but I think you're describing stack-like semantics that are at least similar to what I have described.  I think the stack approach is a pretty cool one that targets the kinds of problems multiple contexts are trying to solve more directly than the dictionary approach would.
> 
> 
> What I'm saying is that an Encoder or Decoder would do something like this:
> 
> 	class MyEncoder: Encoder {
> 		private var allContexts: [CodingContext]
> 		
> 		init(context: CodingContext? = nil) {
> 			allContexts = context.map { [$0] } ?? []
> 			
> 			// We walk through the allContexts array, appending underlyingContexts to the 
> 			// end as we go. This acts as a breadth-first search; the shallowest underlyingContexts
> 			// are towards the beginning of the list.
> 			// 
> 			// We intentionally do not use high-level looping constructs so we can mutate the array.
> 			var i = 0
> 			while i < allContexts.count {
> 				allContexts += allContexts[i].underlyingContexts
> 				i += 1
> 			}
> 		}
> 		
> 		func context<Context: CodingContext>(of type: Context.Type) -> Context? {
> 			// We search the list from top to bottom. Since the list is ordered shallowest to deepest, 
> 			// this favors shallower contexts over deeper ones.
> 			for context in allContexts {
> 				if let matchingContext = context as? Context {
> 					return matchingContext
> 				}
> 			}
> 			return nil
> 		}
> 	}

This makes a lot of sense (I think).  It’s hard to say for sure whether it would have counter-intuitive behavior in some cases, but I think it would do the expected thing at least the majority of the time.  I’m interested in hearing what the Foundation team and others think about this direction.  It seems promising to me.
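To make the resolution order concrete, here is a self-contained sketch of the breadth-first flattening in MyEncoder above, using simplified stand-ins for the Kelp / Core Location contexts from earlier in the thread (all names hypothetical):

```swift
protocol CodingContext {
    var underlyingContexts: [CodingContext] { get }
}
extension CodingContext {
    var underlyingContexts: [CodingContext] { return [] }
}

// A leaf context and a top-level context that supplies it.
struct CLContext: CodingContext { var commaSeparated: Bool }
struct KelpContext: CodingContext {
    var underlyingContexts: [CodingContext] {
        return [CLContext(commaSeparated: true)]
    }
}

// Breadth-first flattening, as in MyEncoder.init: walk the array,
// appending each context's underlyingContexts, so shallower contexts
// sit earlier in the list.
func flatten(_ root: CodingContext?) -> [CodingContext] {
    var all = root.map { [$0] } ?? []
    var i = 0
    while i < all.count {
        all += all[i].underlyingContexts
        i += 1
    }
    return all
}

// Front-to-back search favors shallower contexts over deeper ones.
func context<Context: CodingContext>(of type: Context.Type,
                                     in all: [CodingContext]) -> Context? {
    for candidate in all {
        if let match = candidate as? Context { return match }
    }
    return nil
}

let all = flatten(KelpContext())
let kelp = context(of: KelpContext.self, in: all)  // found at depth 0
let cl = context(of: CLContext.self, in: all)      // found at depth 1
```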

> 
> -- 
> Brent Royal-Gordon
> Architechies
> 
