Watching Haskell Talks: Alain Odea by FP Complete.
"I didn't actually know there were real problems with Java... I felt that Java - coming into learning Haskell - I felt that Java was a really strong language... but as soon as I started learning Haskell, I felt like there were barriers everywhere.... lots of extra code that you have to write. It's hard to do certain kinds of abstractions in terms of functional decomposition and refactoring... It's just easier to write code, and it's easier to write good code."
"You have to write so fewer tests."
"The challenge I had with Erlang was having confidence that it was still working."
"The big challenge for me in going from more liberal languages is that ... Haskell requires you to write true functional programs. You can't just write to the disk, read from the network. You have to actually organize that... Haskell requires you to think differently."
Watching Haskell Talks: Alain Odea by FP Complete.
"[Yesod] The idea that with a type system you can get a secure web application..."
In which Alain tried to introduce an XSS vulnerability into an application written with Yesod, and found it wasn't easy. Interesting.
Watching Haskell Talks: Alain Odea by FP Complete.
"Haskell make it possible to do these small things easily and these large things really really well."
This thought stream serves as my Haskell commentary center and my Haskell story.
Regarding ThoughtStream as a medium - it's lightweight enough so that I'm not tempted to try to add more to it than content (setting up a blog to get it just right can be rather time consuming), but has enough features that I don't feel limited. I know that if I want to drop in some Markdown to organize the content a wee bit better, I can do that.
At a later time, I'll speak to why I'm so interested in Haskell. I'm an enthusiast, an advocate, and a novice!
Listening to: Haskell Cast, Episode 3: Simon Peyton Jones on GHC
Commentary follows below.
It becomes very difficult to change a language once it's embedded in many products. Early in Haskell's lifetime, changes to the standard library were made more liberally. As Haskell becomes more successful, it will become a little less nimble.
"It's not language changes that's most difficult to manage, it's library changes. It's one reason we recently established the core libraries committee."
GHC has supported type-level functions (e.g., `f Int = Char`) in the form of associated types. These were introduced as a result of a paper comparing type-level programmability features across languages; Haskell was missing associated types.
With a closed type family, you can give the equations an order. As a result, you need fewer equations.
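To make the ordered-equations idea concrete, here's a minimal sketch of a closed type family (`Unwrap` is a made-up name for illustration):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A closed type family: equations are matched top to bottom, so the
-- overlap between the two cases is resolved by their order.
type family Unwrap a where
  Unwrap [a] = a
  Unwrap a   = a

x :: Unwrap [Int]   -- first equation matches: Int
x = 42

y :: Unwrap Bool    -- falls through to the catch-all: Bool
y = True

main :: IO ()
main = print (x, y)
```

In an open type family, the catch-all equation would conflict with the first one; closing the family and ordering the equations is what makes it legal.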
Rationale: solve Cabal hell by introducing a level of abstraction.
199x: started with the notion of an ML-like module system, but it turned out to be too much trouble early in the language's evolution, so the simplest thing that could possibly work was chosen instead.
Fast forward to 2005: something was needed to indicate a minimum unit of distribution - the Cabal package. Cabal packages act as a kind of metamodule. The idea behind Backpack is to let Haskell packages depend on APIs rather than concrete implementations. It's not about the Haskell language per se, but rather a way to tackle Cabal hell. It'd be a big change, because it would change how Haskell packages are distributed.
Status: it's not yet implemented.
"Cabal hell is a real place, a very real place".
"How can more people get involved?"
GHC is everyone's compiler; it's time to work on it together. 14 new committers have joined since 2013.
New pieces that are growing: backend code generation, data parallel programming, template Haskell, parallel I/O manager, dynamic linking. There's a lot of room for more ideas.
On BDFLs - there has to be a way to make decisions where universal consensus isn't possible. On the compiler, SPJ will serve as a BDFL of sorts. The core libraries committee will serve as BDFL where library decisions are concerned.
Array fusion as a way to abstract away low-level details of vector operations, as well as vector instructions.
If you want to program a parallel machine, start with a functional language. To make things actually run fast, it takes a lot of iteration.
There's also large scale concurrency in Haskell, like Cloud Haskell.
There's also small-scale concurrency in Haskell, like forkIO and MVars. This is thread-level communication. There's an approach using LVars (lattice-based variables) that might introduce determinism into these kinds of computations.
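As a concrete sketch of the forkIO/MVar style mentioned above (the names and workload are my own):

```haskell
import Control.Concurrent

-- A minimal sketch of forkIO + MVar: a child thread does some work
-- and hands the result back to the main thread through an MVar.
main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO $ putMVar box (sum [1 .. 100 :: Int])
  result <- takeMVar box   -- blocks until the child thread writes
  print result
```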
Purely functional parallelism: Repa, Data Parallel Haskell, Accelerate, etc.
GHC out of the box already supports parallelism.
Part of the parallelism problem is that someone can take (with a lot of effort) a program in a low-level language and access all the parallelism in a given machine. The challenge in a functional, high-level language is to find a way to do this cleanly and within a few constant factors of the speed of the low-level implementation.
You can't screw around with side effects, because the ordering of effects is not obvious in a lazy language.
Being able to prove that pattern-match failures can't happen would be great. There's some research being done on this at the moment. We can be reasonably certain that a Haskell program won't segfault, but a program might still fail at runtime because of a pattern-match failure.
Getting a handle on newtype and coercions. It's possible to convert from an `Age` to an `Int` at runtime for free. It's not possible to convert from a `[Age]` to a `[Int]` for free at runtime - that would require a `map`.
This is where Roles come in. It requires a new form of equality.
GHC 7.8 exposes some of these ideas in an experimental form.
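A small sketch of what GHC 7.8's `Data.Coerce` machinery enables: coercing a whole list of a newtype with no `map` at all (the `Age` newtype is hypothetical, echoing the example above):

```haskell
import Data.Coerce (coerce)

newtype Age = Age Int deriving Show

-- coerce is the zero-cost conversion; roles determine when it is
-- allowed. Because the list type's parameter has a representational
-- role, the whole list converts for free, without a map.
ages :: [Age]
ages = coerce [30, 40 :: Int]

main :: IO ()
main = print ages
```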
Over time, the hope is to build more type-level reasoning into GHC. Type-level naturals are being added, allowing one to express ideas such as a vector of length 3 and to do basic operations on it. Future versions of GHC will support more such operations.
Going beyond word processors and database management in schooling, and trying to reform the ICT curriculum so that computer science is treated as a "foundational discipline".
Functional programming is still a foreign concept to many teachers. A separate exercise from the Computing in School project would be to establish functional programming as a medium for carrying the concepts of computer science. However, less work is being done on this end because the goal is to get computer science into curriculums at all.
Haskell is growing. I'm pretty excited about the developments in the realm of concurrency, myself. Given that I develop APIs as part of my day-to-day, having easy-to-use, parallelizable concurrency primitives in Haskell makes it look very tempting. Furthermore, having strong guarantees available at compile time makes me feel more comfortable that things will work when refactoring time comes around...
...and there will always be refactoring!
To overlap my two thought streams:
I wonder what the Marconi API implemented in Yesod would look like...?
Sounds like a hackday project to me. :)
Watching We're Doing it All Wrong by Paul Phillips.
His comments on for-loops still being taught as the fundamental unit of iteration and operations on collections resonated with me pretty strongly. `map`, `filter`, `fold`, and friends get us much closer to expressing what we mean.
Let's get more functional.
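A tiny sketch of the point, in Haskell: expressing intent with `map`, `filter`, and a fold rather than an index-driven loop (the function name is mine):

```haskell
-- "Sum the doubled even numbers" reads directly off the pipeline,
-- with no loop counters or mutable accumulators in sight.
sumOfDoubledEvens :: [Int] -> Int
sumOfDoubledEvens = foldr (+) 0 . map (* 2) . filter even

main :: IO ()
main = print (sumOfDoubledEvens [1 .. 10])
```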
I tinkered with Yesod today. I'd love to write out an alternative implementation of the Marconi project (what I'm currently working on) that is based on Haskell rather than Python. Here's what I pulled together today: routes definition in Yesod!
```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes       #-}
{-# LANGUAGE TemplateHaskell   #-}
{-# LANGUAGE TypeFamilies      #-}
import Data.Text (Text)
import Yesod

data App = App

mkYesod "App" [parseRoutes|
-- Fundamental routes: health, home doc
/v1 HomeR GET
/v1/health/#Text HealthR GET HEAD

-- Dealing with queues
/v1/queues/#Text QueuesR GET
/v1/queues/#Text/#Text QueueItemR HEAD GET PUT DELETE
/v1/queues/#Text/#Text/stats StatsR GET HEAD
/v1/queues/#Text/#Text/metadata MetadataR GET PUT

-- Handling messages
/v1/queues/#Text/#Text/messages MessagesR GET POST DELETE
/v1/queues/#Text/#Text/messages/#Text MessageItemR GET HEAD DELETE PUT

-- Handling claims
/v1/queues/#Text/#Text/claims ClaimsR POST
/v1/queues/#Text/#Text/claims/#Text ClaimItemR GET PATCH DELETE

-- Admin API
/v1/shards ShardsR GET
/v1/shards/#Text ShardItemR GET HEAD PUT DELETE PATCH
|]
```
It's pretty cool how the routes are typed and how the routes definition is rather self-documenting.
I've also put together a simple implementation of the health resource.
```haskell
getHealthR :: Handler ()
getHealthR = return ()

headHealthR :: Handler ()
headHealthR = getHealthR
```
That's all for now. Next up, I'll try and implement another one of the simple resources, perhaps attempting to handle query parameters.
Reading Haskell: Teaching Haskell in Academia and Industry (panel)
Disclaimer: I've yet to write any significant body of Haskell.
I was struck by the notion that most people who work in Haskell spend a lot of time iterating from a program that doesn't type-check to one that does. It reminds me a lot of the cycles spent in Python getting a program that doesn't pass unit tests to one that does.
This to me sounds like feedback even earlier in the process of development. Sometimes, I don't stop to write the unit tests before the implementation because I've found a good implementation flow. And then I go through a crash-debug-fix cycle for a little bit. After it works, I realize it would've been faster to just write the test in the first place. Trade-offs, ya know?
I wonder how much I'd be able to lean on the compiler to support me writing my ideas down first?
Reading The Compromiseless Reconciliation of I/O and Purity
The argument that I/O actions are themselves pure is interesting, and makes sense. When thinking about `IO` as a series of instructions, it is true that the series of instructions produced by the program will be the same each time.
I'm also pretty fond of the idea that Haskell is great at sequencing and composing these series of instructions.
Instructions as a data structure - very cool idea. Very nicely written.
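A miniature, hypothetical sketch of that "instructions as a data structure" idea: a program is just a value describing effects, and a separate interpreter gives it meaning:

```haskell
-- An instruction set as plain data. Building a [Instr] performs no
-- effects; only the interpreter below touches the outside world.
data Instr = PrintLine String | Done

-- The interpreter: the one place where instructions meet IO.
run :: [Instr] -> IO ()
run []                   = return ()
run (Done : _)           = return ()
run (PrintLine s : rest) = putStrLn s >> run rest

program :: [Instr]
program = [PrintLine "hello", PrintLine "world", Done]

main :: IO ()
main = run program
```

The same `program` value could just as well be pretty-printed or tested without running it, which is the purity argument in a nutshell.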
I've taken a little break from writing about Haskell, since I've been so deeply involved in Python for my day job.
Still, my reading hasn't slowed down. I've come to adore LYAH. Now that I have a much better understanding of monads and how Haskell solves problems, it's been much easier to digest the book. All of the first 12 chapters make a lot of sense to me, and it was fun getting excited about the power of Applicatives. My favorite example so far has been the `Applicative` instance for `[]`. It leads to an expansion over all possible outcomes, which is awesome:
```haskell
(++) <$> ["ha","heh","hmm"] <*> ["?","!","."]
-- ["ha?","ha!","ha.","heh?","heh!","heh.","hmm?","hmm!","hmm."]
```
All it took was three lines to define the Applicative instance for [], too!
```haskell
instance Applicative [] where
  pure x = [x]
  fs <*> xs = [f x | f <- fs, x <- xs]
```
I love that! The whole thinking behind typeclasses is very appealing to me.
Oh, which reminds me - the History of Haskell paper, though a bit long (55 pages, two columns), has been exactly what I've been looking for. It explains how all the features in Haskell, including various extensions, came to be, and what problems they solved. I highly recommend this paper if you're early in your Haskell education.
I've yet to make the time to write something in Haskell, but my learning and excitement hasn't come to a stop.
I had some time to spare while I was waiting for dinner to be ready. Dinner was being prepared by the local China Gate, and it would probably take on the order of 10 to 15 minutes. So I sat down in the lobby with a notepad in hand.
The topic on my mind: how could Haskell be incorporated into a modern CS curriculum? I decided to run as radically as I could with this, writing a rough draft of what such a curriculum might look like.
I won't make any arguments for or against functional programming here. I refer readers instead to papers like this one and this one, talks like this one or this one, and books like this one. This is an exercise in ideation.
Without further ado, let's begin:
Imagine this is the first time you're learning programming. You've never been exposed to mutation, to functions, to compiling, algorithms, or any of the details of architecture. Where do we start?
Four weeks of Haskell. Enough to get through the first few chapters of LYAH and be able to start thinking about recursion. End each week with a simple exercise, for example, upper-casing a list of strings, upper-casing only the first letter of every string that is longer than 2 characters - little things to build confidence. The Udacity Introduction to Computer Science course has many appropriate ideas for beginning level exercises.
With that introductory period out of the way, now's the time to show why computer science is relevant! Take the time to showcase the areas: operating systems, networking, algorithms, programming languages, cryptography, architecture, hardware, and more. Make it relevant:
There are many applicable introductory exercises here that can set the pace for future courses.
This one, and the latter algorithms course on this "Top 10 List", deserve special attention.
Algorithms are fundamentally pure constructs. You give them a well-defined input, and receive a well-defined output.
Take plenty of time to provide weekly exercises. Teaching sorting algorithms, trees, string matching algorithms, and more will be a delight here, I predict.
It's also a good time to introduce basic run-time analysis, e.g., Big O notation.
This is also a beautiful time to introduce QuickCheck in conjunction with invariants.
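Here's a dependency-free sketch of that property-testing idea: stating invariants of `sort` and checking them over a family of inputs. (QuickCheck itself would generate random inputs instead of these permutations.)

```haskell
import Data.List (permutations, sort)

-- Two invariants any sort must satisfy.
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

prop_sorted :: [Int] -> Bool
prop_sorted = isSorted . sort

prop_sameLength :: [Int] -> Bool
prop_sameLength xs = length (sort xs) == length xs

main :: IO ()
main = do
  -- Check the properties over every permutation of a small list.
  let inputs = permutations [3, 1, 4, 1, 5]
  print (all prop_sorted inputs && all prop_sameLength inputs)
```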
Very similar to the basic algorithms course, except now we teach students about some basic ways to organize data. Lists, vectors, trees, hash maps, and graphs - these should be enough to keep most students (and practitioners) well-equipped for years!
QuickCheck and frequent programming exercises will do well here.
If an advanced version of this course is desired, I highly recommend starting from here to brainstorm a variant: Purely Functional Data Structures
This can be very low-level (OSI network stack, TCP window buffering, etc.), it can be very high-level (HTTP, distributed systems), or some mix of the two.
I think the most important concepts students can come out of this course with would be:
Teach them to ask - what is an operating system? How do I manage my resources?
It's worth surveying the concepts of: memory management, file systems, data persistence, concurrency, parallelism, process management, task scheduling, and possibly a bit more.
Great projects in this course include: 1) write your own shell, 2) write a simple, local task manager.
Let's talk types, functions, composition, and problem solving using different approaches. Ideally, such a course would come after learning how to design solutions to mid-sized programming challenges.
After that, have students write an interpreter for a Lisp.
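As a taste of what such a project builds toward, here's a hypothetical, miniature evaluator for a Lisp-like expression tree (a real project would add parsing, symbols, and environments):

```haskell
-- A tiny expression language: numbers and prefix applications,
-- in the spirit of (+ 1 (* 2 3)).
data Expr = Num Int | App String [Expr]

eval :: Expr -> Int
eval (Num n)      = n
eval (App "+" es) = sum (map eval es)
eval (App "*" es) = product (map eval es)
eval (App op _)   = error ("unknown operator: " ++ op)

main :: IO ()
main = print (eval (App "+" [Num 1, App "*" [Num 2, Num 3]]))
```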
More on: write your own language. This course should cover parsing, lexical analysis, type analysis, and the conversion from source to assembly.
This one's going to be fun. Unleash the power of equational reasoning to put together a course that runs through: graph theory, network flow analysis, greedy algorithms, memoization, and more.
This would also be a great time to discuss the price one pays for purity in terms of asymptotic performance, and how to overcome it, if necessary.
Also, an extended treatment of algorithmic analysis in the presence of laziness would be valuable here.
Really, this should be higher up in this list, and a very early course.
The goal of this course is to come out of it knowing how to:
It's a great time to teach some basic ideas for testing, how to experiment with the REPL, and how to take advantage of the type system for simple things.
On a more social level, it's a wonderful time to also guide students towards collaborative design, e.g., how to work together and make it fun and efficient.
This could be a very fun course. It affords the opportunity to allow students to run wild with simulations and experiments of their choosing, while learning about what it means to do high-performance computing in a functional language.
Given that, it should teach some basic tools that will be applicable to most or all projects. How does one benchmark well? What are the evils of optimization? What is over-optimization? When is optimization needed? What tools exist right now to harness parallelism in Haskell (Repa, Accelerate, etc.)? When is data distribution needed? Why is parallelism important? How is parallelism different than concurrency? How can the type system be wielded to help keep units (km/s, etc.) consistent across calculations?
I'd advocate for letting students choose their own projects built in parallel with the course taking place. A simple default is to optimize the much-lauded matrix multiply to larger and larger scales (distributed even, if they want to go so far!). Writing a collision detection engine for a simple game would be pretty interesting, as well.
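Picking up the units question above, here's a minimal sketch of using newtypes to keep units consistent (all the names are illustrative):

```haskell
-- Distinct newtypes for distinct units: mixing up Km and Seconds
-- becomes a type error rather than a silent arithmetic bug.
newtype Km      = Km Double      deriving Show
newtype Seconds = Seconds Double deriving Show
newtype KmPerS  = KmPerS Double  deriving Show

speed :: Km -> Seconds -> KmPerS
speed (Km d) (Seconds t) = KmPerS (d / t)

main :: IO ()
main = print (speed (Km 100) (Seconds 50))
```

At runtime the newtypes cost nothing; they exist purely to make the compiler check the calculation's units.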
These topics are absent from my list for no reason other than I didn't think of them 'til the list was done and articulated. There's so much one can learn and apply at the level of abstraction that computer science (and mathematics) affords that we could specialize further and further. For my own sake, I'm setting a limit. :)
I've just brain-stormed a curriculum in Haskell. There's a lot of details missing, but it's a working draft.
There are also other things to consider beyond the technical aspects. Consider the social ones. How do we teach students to work together? How do we keep learning engaging and fun? How do we help students connect to the greater community of developers that exists outside of academia? How do we keep the lessons relevant to the lives that they lead? How do we encourage them to pursue their passions?
*Cross-posted from my blog: Read, Review, Refactor*
Took me some time to figure out how to do a basic HTTP request in Haskell. Using http-client made it pretty easy, ultimately. Here's the result:
```haskell
import Network.HTTP.Client

main :: IO ()
main = do
  req <- parseUrl "http://google.com"
  mgr <- newManager defaultManagerSettings
  res <- httpLbs req mgr
  print res
```
Be warned: that'll output quite a bit of data to the console.
Fun, fun.
Seems there's quite a bit of power hiding behind the simple http-client interface: data streaming, TLS support, keep-alive, etc.
Watching: Brent Yorgey On Diagrams and Typeclassopedia
A few sections caught my eye.
"I think to be really successful with Haskell, you need to understand at least some of the materials on the Typeclassopedia..."
From here. I agree with Brent on understanding the common abstractions. I've found that the more I learn to read and understand conventions like `>>=`, `>>`, `<-`, `fmap`, `<*>`, `<$>`, `runThing`, and more, the more sense reading Haskell APIs makes to me.
On design patterns, and how they're fundamentally a manifestation of an anti-pattern in software design. I found this ironic, but I understood why. Design patterns exist because the languages we've been using fail to capture the abstractions that we want to use. Iterators, facades, etc. - it all connected pretty strongly with me.
"Design patterns are just libraries that your language isn't expressive enough to write."
The next section that got me thinking was on Teaching Haskell.
"Was there a consistent hurdle?".
Applicative functors and monads seemed to give people trouble. That's been my experience so far with the little bit of programming I've been able to do. Sometimes I'll get stuck trying to get something out of a context, and figuring out what conventions exist for that. For example, while using `Data.Random`, finding out how to plug the result of generating an `RVar [ByteString]` into a function that expects a `[ByteString]` was tricky.
I love the thoughts on teaching students how to move between different levels of abstraction. This is something that I missed while going through school, and had to more or less rough it out while working on my CS degree.
"What do you think about Haskell as a general purpose language for teaching...?"
It seems that reasoning about time/space usage in Haskell is tricky. I can understand this. It would probably be difficult to convey some basic concepts on estimating the complexity of an algorithm in Haskell.
On the other hand, I think the trade-off is worth making. Haskell makes it easy to express algorithms and programs cleanly, and delaying the introduction of algorithmic complexity and space usage seems a fair sacrifice. I've also heard that Chris Okasaki does a great job of introducing how to reason about complexity in a lazy setting in his Purely Functional Data Structures.
Agree? Disagree? Fork this ThoughtStream or catch me on @cppcabrera. Share your thoughts!
A little shifting exercise today:
```haskell
shiftR :: [a] -> [a]
shiftR [] = []
shiftR xs = last xs : init xs

shiftL :: [a] -> [a]
shiftL [] = []
shiftL (x:xs) = xs ++ [x]

shiftBy :: ([a] -> [a]) -> [a] -> Int -> [a]
shiftBy f xs n = head $ drop n $ iterate f xs

shiftByL :: [a] -> Int -> [a]
shiftByL = shiftBy shiftL

shiftByR :: [a] -> Int -> [a]
shiftByR = shiftBy shiftR
```
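A quick usage check of the shifting functions (definitions repeated so the snippet stands alone):

```haskell
-- The empty-list case must come first, or last/init would be
-- reached on [] and crash.
shiftL, shiftR :: [a] -> [a]
shiftL []     = []
shiftL (x:xs) = xs ++ [x]
shiftR []     = []
shiftR xs     = last xs : init xs

shiftBy :: ([a] -> [a]) -> [a] -> Int -> [a]
shiftBy f xs n = head (drop n (iterate f xs))

main :: IO ()
main = do
  print (shiftBy shiftL [1,2,3,4,5 :: Int] 2)  -- [3,4,5,1,2]
  print (shiftBy shiftR [1,2,3,4,5 :: Int] 2)  -- [4,5,1,2,3]
```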
What's the development process?
With a few terminals open:
I do this in GHCi: `:l shift` to load the module, then `:r` after each edit. Once things were working, I moved on to cleaning up the style using hlint.
Good future additions:
Infinitely sleeping, a little systems practice:
Fun with compression in Haskell: