Design decisions / goals / philosophy of Swift

Something concrete to start with: in the standard library, this particular flatMap

extension SequenceType {
    /// Return an `Array` containing the non-nil results of mapping
    /// `transform` over `self`.
    ///
    /// - Complexity: O(*M* + *N*), where *M* is the length of `self`
    ///   and *N* is the length of the result.
    @warn_unused_result
    @rethrows public func flatMap<T>(@noescape transform: (Self.Generator.Element) throws -> T?) rethrows -> [T]
}

makes it clear that flatMap in Swift is more like the flatMap in Scala than the bind (>>=) in Haskell.

The reason for using the stricter definition of bind/flatMap (as in Haskell) where eg

(transform: Element -> [T]) -> [T] // is a flatMap
(transform: Wrapped ->  T?) ->  T? // is a flatMap
(transform: Wrapped -> [T]) ->  T? // is not a flatMap (but still a useful function)
(transform: Element ->  T?) -> [T] // is not a flatMap (but still a useful function)

is, AFAICS, that it can serve as one of the essential building blocks of a software system, allowing for clearer, simpler and more reusable code (since it makes monadic types more composable).
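To make the four shapes above concrete, here is a sketch in current Swift syntax using two free functions; the names bindFlatMap and mapSome are mine (hypothetical), not the stdlib's:

```swift
// A flatMap in the monadic-bind sense: Element -> [T], flattened into [T].
func bindFlatMap<Element, T>(_ xs: [Element], _ transform: (Element) -> [T]) -> [T] {
    var result: [T] = []
    for x in xs {
        result.append(contentsOf: transform(x))  // flatten away one layer of array-ness
    }
    return result
}

// The disputed overload: Element -> T?, keeping only the non-nil results.
func mapSome<Element, T>(_ xs: [Element], _ transform: (Element) -> T?) -> [T] {
    var result: [T] = []
    for x in xs {
        if let t = transform(x) {  // drop nils, unwrap the rest
            result.append(t)
        }
    }
    return result
}

let pairs = bindFlatMap([1, 2, 3]) { [$0, $0 * 10] }   // [1, 10, 2, 20, 3, 30]
let ints  = mapSome(["1", "x", "2"]) { Int($0) }       // [1, 2]
```

The first removes one layer of the same monadic type it operates on; the second changes the "container" from Optional to Array along the way, which is exactly the point of contention.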


Some people think the above SequenceType method should not have been named flatMap because "it is not a flatMap". They still think this method is useful and has its place in the standard library, but that it should not be called flatMap. And now they fear that Swift is on a slippery slope, and will become a mishmash of broken-off pieces from various programming disciplines, joined together into an incoherent mess.


It would be interesting to hear some of the reasoning behind this particular design choice, as well as a more general discussion about the overall design goals of Swift and its type system, the pros and cons of various different design decisions/approaches taken in Swift, compared to eg Rust, Scala, Haskell, Go, Clojure, etc.

Well, it is rather clear from everything Apple has done with Swift so far that Swift is not intended to be an FP language but an OOP language with some useful FP features mixed in.


I think the vision is: use what is helpful in practice and don't follow dogmatism.

I largely agree, and "properly done" OOP and FP are just two sides of the same good thing, and "improperly done" OOP and FP are just two sides of the same bad thing.


But I think there is a point to be made along these lines:


The "dogmatism" of eg algebra has served mathematics well, and the "dogmatism" of mathematics has been of great use in the other sciences as well as in the world in general.


Non-rule-based, incoherent mathematics wouldn't be much fun to use, nor would it be a very powerful tool. And I suppose you (and the rest of the world) would not be particularly fond of a non-dogmatic definition of eg the + - * / operators.


Why should a computer language not benefit from eg algebraic datatypes and a solid "mathematical" foundation? Why shouldn't it strive to make it easy to write reusable and composable building blocks using well studied/understood concepts?
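For instance, Swift's enums with associated values are algebraic datatypes; a minimal sketch (the Maybe type here is a hypothetical re-creation of Optional for illustration, not the stdlib type):

```swift
// A sum type: a value is exactly one of these cases, nothing else.
enum Maybe<Wrapped> {
    case none
    case some(Wrapped)
}

// One payoff of the algebra: the compiler checks that switches are exhaustive.
func describe(_ m: Maybe<Int>) -> String {
    switch m {
    case .none:
        return "nothing"
    case .some(let n):
        return "got \(n)"
    }
}
```

Add a case to Maybe and every non-exhaustive switch over it becomes a compile error, which is the kind of "mathematical" guarantee the question is about.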

Because mathematics in huge parts just plainly fails in describing reality.

What concept/tool would you say is more indispensable than mathematics when it comes to describing/manipulating reality?


The language of mathematics probably evolved / was invented exactly because people have always felt the need for a more effective way to describe and manipulate reality.


Natural language is great for some types of reality description/manipulation, but very inefficient/messy/impractical for (a lot) of other types, hence we have mathematics.


Without mathematics, there would be no airplanes, no telephones, no space travel, and definitely no computers or programming languages. It would be impossible to build/invent all these real things using only natural language, as you would have to reinvent mathematics in the process.


Mathematics is a "strict" and powerful concept (highly composable/regular/reusable, as are eg bricks, or legos).


Natural language is a "sloppy" and powerful concept (powerful because it too is highly composable/regular/reusable, thanks to the emergent rules of natural language (grammar, etc); note also that children often over-extrapolate rules/patterns until they learn the irregular verbs and so on).


I don't know if natural languages would be "better" without irregular verbs and other such warts, or if all natural languages evolve to have fewer irregularities over time. The sloppiness of natural languages is probably good for things like imagination/fantasy/fun/exploration.


But I do know that our current positional number system certainly is "better" (more powerful/effective/practical/simple/deep/elegant) than eg Roman numerals. The Romans would probably have said their crude numerals were the pragmatic choice.

Yes, mathematics is important. But it fails for such simple things as describing a GUI and state. It's not by chance that FP gets really ugly in these areas and needs such sick stuff as monads.

I think that's a bit like saying: Biology is important, but it fails for simple things like describing green plants. It's not by chance that biology gets really ugly in these areas and needs sick stuff like photosynthesis.


Yes, managing the GUI and state of a simple app is easy no matter what tools/concepts you use. But managing sufficiently complex GUIs together with some database state and so on is not a "simple thing", though it can be made simpler by using good tools/concepts. And it's not by chance that many (most?) large such systems (eg Facebook et al) tend to use some form of "functional programming (inspired)" approach for managing exactly these things (using eg React or some other form of futures, streams, signals, reactive/flow-based things). This, and the fact that mainstream programming languages keep adding lambdas, map, filter, etc, is not just some short-lived craze; it's because they are useful, as are the concepts of various classic data structures, OOP, FP, delegates, and monads.


Monads are not some necessary evil to handle state. The monad is a fairly simple, natural and old concept that, among other things, plays an important role in handling complex state without ending up with a mess. Given enough time and hard-to-solve problems, any programmer would discover/reinvent the concept of monads by themselves and just think of it as a tool like any other, yet for some reason "the m-word" is supposed to be scary and hard.


In Swift, you use monads when you use Arrays and Optionals for example, and you don't need to know category theory to use monads any more than you need to know group and set theory to use addition. But it can of course be good to know these things, in the same way that it is good for an animator to know a lot about anatomy, data compression, optics, etc.
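For example, Optional's flatMap really is a monadic bind, and chaining it reads like ordinary plumbing; the parse and reciprocal functions below are made up for illustration:

```swift
// Two fallible steps, each of type T -> U? ...
func parse(_ s: String) -> Int? { Int(s) }
func reciprocal(_ n: Int) -> Double? { n == 0 ? nil : 1.0 / Double(n) }

// ... chained with flatMap instead of nested if-lets.
let a = parse("4").flatMap(reciprocal)  // Optional(0.25)
let b = parse("0").flatMap(reciprocal)  // nil (reciprocal fails)
let c = parse("x").flatMap(reciprocal)  // nil (parse fails, reciprocal never runs)
```

No category theory in sight: nil just short-circuits the rest of the chain, which is the "handling state without a mess" in miniature.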


For a total newcomer to programming in general, it would probably be harder to understand the concepts of eg C's for-loops, pointers and main function than the concepts of functions, functors, monoids, monads, etc. If a seasoned programmer thinks the latter are hard to understand and irrelevant no matter how they are presented, then I think it has more to do with habitual thinking than objective complexity/strangeness. (No, it's not me in that talk; he just happens to also be called Jens.)

Am I correct in thinking that your complaint is essentially that flatMap() is not equivalent to map() followed by flatten()?


Sometimes it's described that way (by other comments in the stdlib), but in the case where the supplied transform produces an Optional, the flatten() seems to be replaced by a non-nil filter and unwrapping. flatten() does not seem to be defined on Optionals, nor on sequences of Optionals, so the follow-up operation performed by flatMap() in the case of optional-producing transforms has no apparent relationship to "flattening".


If that's the crux of your argument, I agree; the current definitions seem inconsistent and kind of random to me. But I don't have much background in FP. You mention that Swift's flatMap() seems similar to that of Scala. So what's Scala's excuse?

Yes, the following excerpt from the standard library clearly shows the two different definitions of flatMap:


extension SequenceType {
    /// Return an `Array` containing the concatenated results of mapping
    /// `transform` over `self`.
    ///
    ///     s.flatMap(transform)
    ///
    /// is equivalent to
    ///
    ///     Array(s.map(transform).flatten())
    ///
    /// - Complexity: O(*M* + *N*), where *M* is the length of `self`
    ///   and *N* is the length of the result.
    @warn_unused_result
    @rethrows public func flatMap<S : SequenceType>(transform: (Self.Generator.Element) throws -> S) rethrows -> [S.Generator.Element]
}

extension SequenceType {
    /// Return an `Array` containing the non-nil results of mapping
    /// `transform` over `self`.
    ///
    /// - Complexity: O(*M* + *N*), where *M* is the length of `self`
    ///   and *N* is the length of the result.
    @warn_unused_result
    @rethrows public func flatMap<T>(@noescape transform: (Self.Generator.Element) throws -> T?) rethrows -> [T]
}


Perhaps it would be less confusing if the latter were called eg mapSome rather than flatMap.


Regarding Scala's (and Swift's) interpretation/definition of flatMap, I guess you could explain/motivate it by saying that it is used when you want to remove one layer of "collectionness" from the results of your transform. The notion of "collectionness" here would then apply to eg Optionals too, since an Optional can be seen as a collection of one or zero elements.
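Under that reading, the Optional overload would follow from the Sequence one if Optional were itself a sequence. A purely illustrative sketch (this conformance is not in the stdlib):

```swift
// Treat a nil as an empty sequence and a non-nil as a one-element sequence.
extension Optional: Sequence {
    public func makeIterator() -> AnyIterator<Wrapped> {
        var remaining = self
        return AnyIterator {
            defer { remaining = nil }  // yield the wrapped value at most once
            return remaining
        }
    }
}

let present = Array(Optional(42))   // [42]
let absent  = Array(Int?.none)      // []
```

With such a conformance, "flattening" a sequence of Optionals would drop the nils and unwrap the rest, making the disputed overload just another instance of removing one layer of collectionness.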


But that would not explain the choice to do away with, or disregard, the possible advantages of having a flatMap that is a monadic bind, which could work as one of a few general and powerful building blocks of a software system.


There are a lot of discussions to be found on this topic, like eg:

http://igstan.ro/posts/2012-08-23-scala-s-flatmap-is-not-haskell-s.html

and there is this rant by Gordon Fontenot over the introduction of this particular flatMap in Swift:

http://buildphase.fm/90

which is perhaps best listened to after watching the following talk, which he did before the introduction of the new flatMap and the podcast rant:

https://vimeo.com/132657092


But I think it would be interesting to hear some more substantial arguments as to why eg Gordon's views on this are "wrong".

"Because mathematics in huge parts just plainly fails in describing reality."


I'm sure Sir Isaac Newton would be very surprised to hear this.


Actually, what mathematics is extremely good at is describing the relationships between things. Which is what declarative (e.g. functional) programming languages also do: they allow you to describe the relationships between your inputs and outputs, while leaving it up to the machine to figure out how/when/if/where to perform the actual calculations. What math is not so good at is describing relationships between state and time. Mind you, neither are your imperative programming languages; the fact that they can do it at all doesn't mean they actually do it well. Especially when non-determinism gets involved, i.e. anything other than a single-threaded single process that never interacts with shared resources: a naive conceit that is comically unrealistic in this modern, ubiquitously networked world, but which a great many programmers seem to cling to anyway.


If anything, imperative languages are the best argument for eliminating, containing, and minimizing use of state as much as humanly (and inhumanly) possible, because if a system's too complex for a human to reason correctly about, then it had better be strict enough that the machine can. That, for example, means creating small, focused, declarative DSLs for common tasks, such as wiring GUI Views to stateful Models, where it's more reliable for the human to describe the relationships between the inputs and outputs and let the machine figure out what to recalculate and refresh when, than the other way about. Eliminate statefulness and provide a nice, concise, expressive algebraic syntax, and you won't fix all the woes of software development, but you will eliminate very specific subsets of them, leaving developers more time to work on the remaining mess that can't be automated away.
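Even without metaprogramming, the core of such View-to-Model wiring can be sketched in a few lines of Swift; this Observable is a toy (not any particular framework), just to show the shape of "declare the relationship once, let the machinery push updates":

```swift
// A minimal observable box: observers declare their relationship to the
// value once, and are re-run automatically whenever it changes.
final class Observable<Value> {
    private var observers: [(Value) -> Void] = []

    var value: Value {
        didSet { observers.forEach { $0(value) } }  // push the new value to every observer
    }

    init(_ value: Value) { self.value = value }

    func bind(_ observer: @escaping (Value) -> Void) {
        observers.append(observer)
        observer(value)  // push the current value immediately
    }
}

// Usage: the "view" declares what it shows, not when to refresh it.
let model = Observable(0)
var labelText = ""
model.bind { labelText = "Count: \($0)" }
model.value = 3   // labelText is now "Count: 3", with no explicit refresh call
```

A real framework adds unsubscription, threading, composition of signals and so on, but the inversion is the same: the human states the relationship, the machine decides when to recalculate.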


Unfortunately, Swift firmly screws the pooch on this, because it is a big complex ball-of-mud language that doesn't believe in metaprogramming, never mind vigorously supporting it. You can't just cut Swift down, say, to the minimum subset of 'safe' features only, and then build up your own declarative DSL on that. Even if you could, its big-ball-of-mud-ness makes it impossible to guarantee that something internal won't sneak about puncturing holes in the abstractions you do build. Any pretense at mathematical rigor is inevitably compromised by design and implementation; heck, you can't even express simple, valid requirements such as `[Hashable:Foo]`, or a cast from `Any` via `as? [Any]`, without it getting upset. (And don't get me started on the value vs reference debacle; as if C and Java weren't enough of a lesson on the eternal pain caused by that nonsense.) It's a reactive design, discovering and dealing with problems as they arise, instead of designing it so such problems cannot exist in the first place. It's going to take years just to shake out all the bugs and holes and other problems, and that's assuming they don't spend more time slapping on even more features (which programmers love) instead of ironing out the ones they've already got (which programmers hate). To quote Tony Hoare:


"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."


The best description of Swift I've seen is "Compiler Oriented Design". Another good description is an endlessly sprawling mish-mash of special cases always falling just short of the general. Parsimonious it is not, and that means we're all going to be paying a stupidly high price in the long run, just because it was cheaper for its authors to make it complex than simple in the short. Originally Swift was just Lattner having some fun inventing his own language (which is fine and good exercise); quite how it got from there to here speaks more of hubris and wishful optimism than steely-nosed pragmatism, along with the industry's standard (and by now no doubt ISO-standardized) failure to heed Fred Brooks' "write one to throw away" mantra.


TBH, I wish Apple'd looked at Swift, said "that's nice", and then just gone ahead and made a 'safe' ObjC, much as Cyclone was a 'safe C' or C# a safe 'replacement' for C++, because Swift simply isn't enough of an improvement over that to justify the years of awkwardness and inconsistency that are going to ensue just because it wants to stand out. Ultimately, it's the Cocoa APIs, not the language, where all the real value is, and all they've really done is de-value that asset in return for an unproven handful of beans. But then, no-one ever got famous or promoted by taking a conservative, continuous non-disruptive approach to language design, which is why the languages we do get completely **** at evolution, and why all our revolutions turn out to look much the same as before, only with a newer paint coat on top.

I think it's important to be clear here: contrary to any popular programmer memes currently in circulation (cough), Swift is not in any way functional and cannot do functional programming, period. Swift is an imperative language, and imperative means the user has to state every single computation in the exact order it's to be performed, and the machine has to follow those instructions to the letter because it simply doesn't have the insight or guarantees necessary to make intelligent decisions about what needs to be done and when for itself.


Declarative programming is higher level programming, where you state what you want - e.g. as a set of logic rules, or pipeline transformations, or input-output relationships - and it's left to the machine to figure out exactly when, where, and what calculations it needs to perform in order to produce the requested result. Thus what distinguishes a language like Haskell from Swift or C is not all the things it can do but the things it cannot, because it is those restrictions that enable the machine to reason fully and correctly about the code, and thus make decisions about what to do and when on your behalf. This includes very powerful decisions such as when to defer evaluation unless/until actually needed, when to memoize results to avoid the cost of recalculating later, when to parallelize calculations across multiple processor cores for speed, and it's all completely automatic and done for you. You give up precise control, but gain greater power and simplicity in return.
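Swift itself offers a small, local taste of deferred evaluation with its lazy sequence wrappers, where a pipeline is a description of a computation and work happens only for the elements actually demanded:

```swift
// Nothing is computed here; this is just a description of a pipeline.
let squaresOfEvens = (1...1_000_000).lazy
    .map { $0 * $0 }
    .filter { $0 % 2 == 0 }

// Work happens on demand: only enough of the range is traversed to
// produce the first two matching elements.
let firstTwo = Array(squaresOfEvens.prefix(2))  // [4, 16]
```

This is of course opt-in and local rather than the pervasive, compiler-reasoned laziness of a language like Haskell, but the trade described above (give up precise control of when things run, gain a simpler description) is the same in miniature.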


To use an analogy: think of declarative programming models as automating the task of determining the exact sequence of low-level operations, just as garbage collection automates the task of low-level memory management. Automation is not appropriate in all tasks, of course, but for those where it is, once it's up and running correctly it is very hard to beat.

Please get started on the value vs reference debacle.

Thanks, although I'm not entirely sure what you are being clear about and why. I do know that Swift is not a functional programming language. But no matter what labels we use, most programming languages make it possible to write abstractions that allow code to be executed in some other order than that in which it was written. Some obvious examples are the Cocoa target-action design pattern and NSOperations. So when you say:


Imperative means the user has to state every single computation in the exact order it's to be performed, and the machine has to follow those instructions to the letter because it simply doesn't have the insight or guarantees necessary to make intelligent decisions about what needs to be done and when for itself.


I wonder how you would categorize asynchronous function calls, remote procedure calls, frameworks like Reactive Extensions, etc.


So we can always use an imperative language and its fork or dispatch_… functions along with any closures or function pointers or whatever in order to write higher level constructs that can then be used to program in a more or less declarative style. It's just a matter of abstractions, code constructs and disciplines. These features can be built-in, written by users, encouraged, discouraged, easy or hard to write. We can call various code constructs and/or disciplines functional, declarative, FRP, OOP, POP, MVC, MVVM and whatever, but in the end all these things can of course be accomplished even if we started out from eg just assembler.


Here is an excerpt from how Apple presents Swift (on developer apple com slash swift):

Swift has many other features to make your code more expressive:

  • Closures unified with function pointers
  • Tuples and multiple return values
  • Generics
  • Fast and concise iteration over a range or collection
  • Structs that support methods, extensions, and protocols
  • Functional programming patterns, e.g., map and filter
  • Native error handling using try / catch / throw

And Chris Lattner's website says that Swift is "drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list."

All I'm saying is that these "functional programming patterns" of Swift and their relation to the various protocols and types in the standard library can be realised in different ways, and some ways are nice/simple/powerful while other ways are messy/complex/less powerful. This is not about Swift being a "functional" or "imperative" language; it's about the design choices made for the "functional programming patterns" and "protocol oriented" aspects of the standard library and language.


And since people are writing and using for example Signals in Swift, it should be clear that what I'm asking for/about is not something that is impossible because Swift is "imperative" or something like that. This is simply about what design decisions are being made and the motivations behind them.

Actually it turns out that there is a motivation behind this, and it is the "collectionness" of Optional that I speculated about above.

It's in here: https://github.com/typelift/Swiftz/issues/245


So I guess they will make Optional conform to SequenceType.

Also see for example this about professional "functional programming" in "imperative languages".
