The Southern Cross Stunts Trophy


Hakyll source

Here is the Hakyll 4-powered Haskell program which generates the Southern Cross site. This page contains literate Haskell, so you can paste the contents under each module definition directly into a .lhs file and compile it. For extra convenience, the raw source files of the modules are provided as well (here is the Main module). If you are completely new to Hakyll, the code will make more sense if you check the basic tutorials on Hakyll’s site beforehand.

Enough prolegomena; let us get down to business. As ever, we start with imports and pragmas.

> {-# LANGUAGE OverloadedStrings #-}
> module Main where
> 
> import Control.Applicative ((<$>))
> import Data.Monoid (mappend, mempty, mconcat)
> import System.FilePath ( takeFileName, replaceExtension, takeExtension
>                        , takeBaseName, (</>))
> import Data.Ord (comparing)
> import Data.List (break, sortBy)
> import Control.Monad (forM_, mapM)
> import Data.Maybe (fromMaybe, fromJust, isJust)
> import qualified Data.Map as M (lookup, member, fromList)
> import Data.Time.Format ( formatTime, defaultTimeLocale
>                         , parseTimeOrError)
> import Data.Time.LocalTime (LocalTime (..))
> import Data.Time.Clock (UTCTime (..))
> import Hakyll
> import Text.Pandoc.Options
> import Text.Highlighting.Kate (styleToCss, zenburn)
> import qualified RaceVars

Preliminaries

In what follows, we will employ slightly modified versions of the Pandoc renderer and compiler exported by Hakyll, so that they produce HTML5. We also define a variant of renderPandoc more convenient for pipelining in the Compiler monad, as well as a Pandoc compiler which can insert a table of contents.

> ourPandocWriterOptions :: WriterOptions
> ourPandocWriterOptions = defaultHakyllWriterOptions{ writerHtml5 = True }
> 
> tocPandocWriterOptions :: WriterOptions
> tocPandocWriterOptions = ourPandocWriterOptions
>     { writerTableOfContents = True
>     , writerTemplate = Just "$toc$\n$body$"
>     }
> 
> processWithPandoc :: Item String -> Compiler (Item String)
> processWithPandoc = processWithPandoc' False
> 
> processWithPandoc' :: Bool -> Item String -> Compiler (Item String)
> processWithPandoc' withToc =
>     renderPandocWith defaultHakyllReaderOptions
>         (if withToc then tocPandocWriterOptions else ourPandocWriterOptions)
> 
> pandocCompilerOfOurs :: Compiler (Item String)
> pandocCompilerOfOurs = pandocCompilerOfOurs' False
> 
> pandocCompilerOfOurs' :: Bool -> Compiler (Item String)
> pandocCompilerOfOurs' withToc =
>     pandocCompilerWith defaultHakyllReaderOptions $
>         if withToc then tocPandocWriterOptions else ourPandocWriterOptions

Then, we introduce baseCtx, the vanilla context used by nearly all pages of the site. Beyond Hakyll’s defaultContext, it also includes a few simple string fields with values defined in the RaceVars module, which provide key information about the current race. There is also importedCtx, which is part of our trick to better handle JavaScript.

> baseCtx :: Context String
> baseCtx = mconcat (map (uncurry constField) fieldDefs)
>     `mappend` importedCtx `mappend` defaultContext
>         where
>         fieldDefs =
>             [ ("trackName", RaceVars.trackName)
>             , ("carName", RaceVars.carName)
>             , ("trackFilePath", RaceVars.trackFilePath)
>             , ("startDateStr", RaceVars.startDateStr)
>             , ("deadlineStr", RaceVars.deadlineStr)
>             ]
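
To illustrate how these fields surface, a template or page body could refer to them directly (a hypothetical snippet; the actual templates live in templates/):

> -- With baseCtx in scope, a template line such as
> --   <p>Current race: $trackName$, driven with the $carName$.</p>
> -- renders as "Current race: Cydonia, driven with the
> -- Porsche March Indy."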

Main rules

The main rules block begins with the run-of-the-mill stuff - images, stylesheets, scripts and templates.

> main :: IO ()
> main = hakyll $ do
> 
>     match "images/**" $ do
>         route   idRoute
>         compile copyFileCompiler
> 
>     match "style/*.css" $ do
>         route   idRoute
>         compile compressCssCompiler
> 
>     match "scripts/**" $ do
>         route   idRoute
>         compile copyFileCompiler
> 
>     match "templates/*" $
>         compile templateCompiler

Downloadable files are copied as well. The regex pattern is there so that the metadata files are not copied along with the downloadables.

>     match ("files/**" .&&. complement (fromRegex ".*\\.metadata")) $ do
>         route   idRoute
>         compile copyFileCompiler
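
To make the effect of the pattern concrete (hypothetical file names):

> -- "files/packs/scr12.zip"          is copied;
> -- "files/packs/scr12.zip.metadata" is left out.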

The cfg directory contains configuration contingencies such as the robots and htaccess files. They are copied to the destination root. The htaccess file needs to be renamed, as Hakyll ignores hidden files.

>     match "cfg/htaccess" $ do
>         route   $ constRoute ".htaccess"
>         compile   copyFileCompiler
> 
>     match "cfg/*" $ do
>         route   $ gsubRoute "cfg/" (const "")
>         compile   copyFileCompiler

HTML in the notbody directory is meant to be substituted verbatim into a template; thus no route and a trivial compiler.

>     match "notbody/*.html" $
>         compile getResourceBody

The pagination of the news posts and the building of the archive pages - content, index and HTML form - answer for a disproportionate share of the complexity of the program. With the three following rules we set up, respectively, the season archives, the race archives together with their downloadable race packs, and the news archive plus the archives interface.

The first two rules are parameterized to leave open the possibility of reusing them with different sets of races or seasons. We will soon delve into the implementation of these archive rules.

>     let archParams = ArchiveParameters
>             { staticRouting = makeStaticPath' False
>             }
> 
>     baseArchiveRules archParams "archives/includes/season*.html"
> 
>     raceArchiveRules archParams
>         "archives/includes/race*.html" "files/packs/scr*.zip"
> 
>     newsAndArchiveInterfaceRules

Two simple rules follow: one for the outer shells of the pages with forms, which are written in plain HTML, and another for the PHP track viewer interface. templates/default.html is the main template of the site, containing the link to the main stylesheet, the navigation bar and other site-wide HTML bits; baseCtx, similarly, is our vanilla context. Note that we do not use relativizeUrls with the PHP page, as it would eat the URLs within PHP includes.

>     let htmlPages = fromList
>             [ "archives.html"
>             , "trackviewer.html"
>             ]
> 
>     match htmlPages $ do
>         route     idRoute
>         compile $ getResourceBody
>             >>= loadAndApplyTemplate "templates/default.html" baseCtx
>             >>= relativizeUrls
> 
>     match "old-trkview.php" $ do
>         route     idRoute
>         compile $ getResourceBody
>             >>= loadAndApplyTemplate "templates/default.html" baseCtx

Simple pages of the site are written in Markdown. The applyAsTemplate baseCtx is there so that the RaceVars fields are properly substituted. Note that it is necessary to escape the $ of the fields within these Markdown pages; that is, \$field\$ instead of $field$.

>     let markdownPages = fromList
>             [ "links.markdown"
>             , "mods.markdown"
>             , "halloffame.markdown"
>             , "standings.markdown"
>             , "race.markdown"
>             , "comingsoon.markdown"
>             ]
> 
>     let markdownWithToc = fromList
>             [ "rules.markdown"
>             , "about.markdown"
>             ]
> 
>     match (markdownPages .||. markdownWithToc) $ do
>         route   $ setExtension "html"
>         compile $ do
>             addToc <- matches markdownWithToc <$> getUnderlying
>             pandocCompilerOfOurs' addToc
>                 >>= applyAsTemplate baseCtx
>                 >>= loadAndApplyTemplate "templates/default.html" baseCtx
>                 >>= relativizeUrls

The front page compiler is similar, except that we load the most recent news posts through the virtual/news-includes item. We will see how that is generated in a little while. Another difference is that we only render the page with Pandoc after using applyAsTemplate. A pleasant consequence is that we do not need to escape the $ in the fields like in the other Markdown pages.

>     match "index.markdown" $ do
>         route   $ setExtension "html"
>         compile $ do
>             recentNews <- loadBody "virtual/news-includes"
>             getResourceBody
>                 >>= applyAsTemplate (constField "news" recentNews)
>                 >>= processWithPandoc
>                 >>= loadAndApplyTemplate "templates/default.html" baseCtx
>                 >>= relativizeUrls

As a finishing touch, the following rule generates this very page you are reading. We also include a raw copy of the source file, and a stylesheet for syntax highlighting generated by Pandoc.

>     let haskellSourceFiles :: [Identifier]
>         haskellSourceFiles =
>             [ "scr.lhs"
>             , "RaceVars.lhs"
>             ]
> 
>     let haskellSource = fromList haskellSourceFiles
> 
>     match haskellSource $
>         compile pandocCompilerOfOurs
> 
>     create ["hakyll.html"] $ do
>         route   $ idRoute
>         compile $ do
>             mTitleCtx <- fmap (constField "title") <$>
>                 getMetadataField "scr.lhs" "title"
>             source <- concatMap itemBody <$> mapM load haskellSourceFiles
>             makeItem source
>                 >>= processWithPandoc' True
>                 >>= loadAndApplyTemplate "templates/default.html"
>                         (fromMaybe mempty mTitleCtx `mappend` baseCtx)
>                 >>= relativizeUrls
> 
>     match haskellSource $ version "raw" $ do
>         route   $ customRoute (("files" </>) . takeFileName . toFilePath)
>         compile   copyFileCompiler
> 
>     create ["style/syntax.css"] $ do
>         route   $ idRoute
>         compile $ makeItem (compressCss . styleToCss $ zenburn)

JavaScript imports

That covers the main rules block; next come the details we skipped thus far. The first of them is importedCtx and how it helps us manage our JavaScript. To use it in our pages, we fill an import metadata field with a comma-separated list of files (which, for this site, are placed in the notbody directory). We process that field using the following helper compiler, which generates a Pattern from that list of files…

> getImportPattern :: Identifier -> Compiler Pattern
> getImportPattern id' = do
>     mImport <- fmap (map trim . splitAll ",") <$> getMetadataField id' "import"
>     return $ fromList . map fromFilePath . fromMaybe [] $ mImport

… which is then used by importedCtx to retrieve the contents of the files and insert them into a conveniently placed imported field in the default template. In our case, these imported files contain the <script> elements we wish to insert in the destination pages.

> importedCtx :: Context String
> importedCtx = field "imported" $ \it -> do
>     importPattern <- getImportPattern $ itemIdentifier it
>     concatMap itemBody <$> loadAll importPattern
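
As an example of the two functions working together, a page could carry metadata such as (hypothetical file names):

> -- import: notbody/viewer-scripts.html, notbody/shared-scripts.html
> --
> -- importedCtx would then fill the $imported$ field of the default
> -- template with the concatenated bodies of those two files.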

News posts and archives

And now, as promised, we move on to the handling of news posts and archive pages (they overlap, as we have a news archive too). Let us begin with some helper functions. makeStaticPath is used to transform the relative paths of the identifiers of the archive includes (accessed through the JavaScript-powered form displayed in archives.html) into the absolute paths (as expected by relativizeUrls) of their static counterparts (accessed through the archives index, or via the permanent links shown in archives.html once you load one of the includes).

> makeStaticPath :: FilePath -> FilePath
> makeStaticPath = makeStaticPath' True
> 
> makeStaticPath' :: Bool -> FilePath -> FilePath
> makeStaticPath' absolute =
>     (if absolute then ('/' :) else id)
>         . replaceAll "/includes/" (const "/static/") . fixExtension
>     where
>     fixExtension path
>         | takeExtension path == ".markdown" = replaceExtension path ".html"
>         | otherwise                         = path

Next, two time formats which will be used when generating file names for the news archive pages from publication dates. publishedTimeFormat is the format used in the published fields across the pages, while ourTimeFormat is used in file names such as 2013-08-11-21-01-news.html.

> publishedTimeFormat :: String
> publishedTimeFormat = "%Y-%m-%dT%H:%M:%S%Z"
> 
> ourTimeFormat :: String
> ourTimeFormat = "%F-%H-%M"
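
For a hypothetical publication instant, the two formats read:

> -- published field: "2013-08-11T21:01:00UTC"  (publishedTimeFormat)
> -- file name part:  "2013-08-11-21-01"        (ourTimeFormat)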

The heart of the archive rules

And now, the main course. The two pairs of compilers below are the heart of the archive page generation. The arguments they take allow us to account for the differences between the processing of the news archives and that of the other archives. Starting with the first pair: note how the compilers are linked through the forStaticPageProcessing snapshot, and how we save the snapshot before applying the include-archive template, as the template adds links which are irrelevant for the static version.

> baseArchiveCompiler :: Maybe (Identifier -> Compiler FilePath)
>                     -> Item String -> Compiler (Item String)
> baseArchiveCompiler staticMaker it = do
>     staticPath <- getUnderlying
>         >>= fromMaybe (return . makeStaticPath . toFilePath) staticMaker
>     let archiveIncludeCtx = constField "static" staticPath `mappend` baseCtx
>     return it
>         >>= saveSnapshot "forStaticPageProcessing"
>         >>= loadAndApplyTemplate
>                 "templates/include-archive.html" archiveIncludeCtx
>         >>= relativizeUrls
> 
> staticPageCompiler :: Maybe (Identifier -> Compiler String)
>                    -> Maybe String -> Compiler (Item String)
> staticPageCompiler titleMaker baseVersion = do
>     id' <- getUnderlying
>     let mcTitle = ($ id') <$> titleMaker
>     mTitle <- fromMaybe (return Nothing) $ fmap Just <$> mcTitle
>     let staticCtx = fromMaybe mempty (constField "title" <$> mTitle)
>             `mappend` baseCtx
>     loadSnapshot (setVersion baseVersion id') "forStaticPageProcessing"
>         >>= makeItem . itemBody
>         >>= loadAndApplyTemplate "templates/static-archive.html" staticCtx
>         >>= loadAndApplyTemplate "templates/default.html" staticCtx
>         >>= relativizeUrls
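
Schematically, the snapshot ties the pair together as follows:

> -- include page: body --> saveSnapshot "forStaticPageProcessing"
> --                    --> include-archive.html --> include HTML
> -- static page:  loadSnapshot "forStaticPageProcessing"
> --                    --> static-archive.html --> default.html
> --                    --> standalone page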

The other pair of compilers prepares the items of the archive interfaces. These intricate compilers for simple HTML elements may appear to be too complicated a solution when compared to, e.g., just using Blaze directly; the payoff, however, is an almost perfect separation of code and presentation.

> indexLinkCompiler :: (Identifier -> Compiler String)
>                   -> (Identifier -> Compiler FilePath)
>                   -> Compiler (Item String)
> indexLinkCompiler captionMaker hrefMaker = do
>     id' <- getUnderlying
>     caption <- captionMaker id'
>     staticPath <- hrefMaker id'
>     let archiveLinkCtx = bodyField "href"
>             `mappend` constField "caption" caption
>     makeItem staticPath
>         >>= loadAndApplyTemplate
>                 "templates/link-with-caption.html" archiveLinkCtx
> 
> optionCompiler :: (Identifier -> Compiler String)
>                -> (Identifier -> Compiler FilePath)
>                -> Compiler (Item String)
> optionCompiler captionMaker valMaker = do
>     id' <- getUnderlying
>     caption <- captionMaker id'
>     val <- valMaker id'
>     let archiveOptionCtx = bodyField "caption"
>             `mappend` constField "value" val
>     makeItem caption
>         >>= loadAndApplyTemplate
>                 "templates/archives-form-option.html" archiveOptionCtx

The three functions below are meant as defaults for the arguments of the two compilers just above:

> standardHrefMaker :: Identifier -> Compiler FilePath
> standardHrefMaker = return . makeStaticPath . toFilePath
> 
> standardValMaker :: Identifier -> Compiler FilePath
> standardValMaker = return . takeFileName . toFilePath
> 
> standardCaptionMaker :: Identifier -> Compiler String
> standardCaptionMaker id' =
>     fromMaybe (takeFileName . toFilePath $ id')
>         <$> getMetadataField id' "caption"
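
For a hypothetical identifier, the defaults produce:

> -- standardHrefMaker "archives/includes/race05.html"
> --     yields "/archives/static/race05.html";
> -- standardValMaker yields "race05.html"; and
> -- standardCaptionMaker falls back to "race05.html" when there is
> -- no caption metadata field.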

Basic archive pages

We can now enter the archive rules proper. baseArchiveRules prepares the set of items common to both race and season archives. It takes two arguments; namely, an ArchiveParameters, which sets the function used to route the static pages (and to prepare the links to them accordingly), and the pattern for the include version of the archive pages. Note that, given how the ArchiveParameters was set in the main block, the static pages will end up in the archives/static directory.

The rule begins by producing include and static versions of the archive pages through baseArchiveCompiler and staticPageCompiler, which, as we mentioned above, are linked by snapshots.

> data ArchiveParameters = ArchiveParameters
>     { staticRouting :: String -> String
>     }
> 
> baseArchiveRules :: ArchiveParameters -> Pattern -> Rules ()
> baseArchiveRules params archPattern = do
> 
>     match archPattern $ do
>         route   idRoute
>         compile $ getResourceBody
>             >>= baseArchiveCompiler
>                     (Just $ return . ('/' :)
>                         . staticRouting params . toFilePath)
> 
>     let toStaticPath = staticRouting params . toFilePath
> 
>     match archPattern $ version "static" $ do
>         route   $ customRoute toStaticPath
>         compile $ staticPageCompiler Nothing Nothing

Next up are the various links and form options of the archive pages. With the heavy lifting done by the compilers defined above, the rules here are simple to write.

>     match archPattern $ version "index" $
>         compile $ indexLinkCompiler standardCaptionMaker
>             (return . ('/' :) . toStaticPath)
> 
>     match archPattern $ version "option" $
>         compile $ optionCompiler standardCaptionMaker
>             (return . takeFileName . toStaticPath)
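
To recap, each archive page matched by archPattern now exists in four versions:

> -- (no version): include HTML, fetched by the JavaScript of archives.html
> -- "static":     a standalone page routed under archives/static/
> -- "index":      a captioned link for the archives index
> -- "option":     an <option> element for the archives form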

Arranging the race packs

For the race archives, we also want to provide links and form options for the downloadable zip packages with the track and replays of the race. To generate them in tandem with the remainder of the race archive setup, we define a new rule which, first of all, invokes baseArchiveRules. The additional last argument of raceArchiveRules is the pattern for the race packs.

> raceArchiveRules :: ArchiveParameters -> Pattern -> Pattern -> Rules ()
> raceArchiveRules params sbPattern packPattern = do
> 
>     baseArchiveRules params sbPattern

The first added subrule is a very simple one for building the pack links.

>     match packPattern $ version "index" $
>         compile $
>             let captionMaker = return . const "Get Track and Replays"
>             in indexLinkCompiler captionMaker standardHrefMaker

As for the pack form options, we use a different caption maker in order to reuse the caption of the corresponding race archive page, for extra consistency. The trick here is to build a map which pairs the identifiers of the packs with those of the scoreboard/race archive pages, and then to prepare a caption maker which retrieves the caption of the latter. The assumptions for this approach to work are that there is one race archive page for each race pack, and that the chronological orders of race pages and packs, as defined by the published metadata field in each case, coincide.

>     packToScoreboardMap <- do
>         let fauxChronological ids = map itemIdentifier <$>
>                 (chronological . map (flip Item $ ()) $ ids)
>         packIds <- getMatches (packPattern .&&. hasNoVersion)
>             >>= fauxChronological
>         sbIds <- getMatches (sbPattern .&&. hasNoVersion)
>             >>= fauxChronological
>         return $ M.fromList $ zip packIds sbIds
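> 
>     -- e.g. (hypothetical identifiers, assuming matching chronologies):
>     --   packToScoreboardMap == M.fromList
>     --       [ ("files/packs/scr12.zip", "archives/includes/race12.html") ]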
> 
>     match packPattern $ version "option" $
>         compile $
>             let captionMaker = standardCaptionMaker
>                     . fromJust . (`M.lookup` packToScoreboardMap)
>                     . setVersion Nothing
>             in optionCompiler captionMaker standardValMaker

The archive index entries for the races have the links for the race page and the corresponding zip package placed side by side, as specified by the race-links template. This compiler performs the final extra step of zipping the links produced by the other race archive compilers into an appropriate context for usage with that template.

> raceLinksCompiler :: Pattern -> Pattern -> Compiler String
> raceLinksCompiler sbPattern packPattern = do
>     scoreboardLinks <- loadAll (sbPattern .&&. hasVersion "index")
>     racePackLinks <- loadAll (packPattern .&&. hasVersion "index")
>     sortedSBLinks <- chronological scoreboardLinks
>     sortedPackLinks <- chronological racePackLinks
>     let makePairing sb pk = itemSetBody (itemBody sb, itemBody pk) sb
>         pairedLinks = zipWith makePairing sortedSBLinks sortedPackLinks
>         sbLinksCtx = field "scoreboardLink" $ return . fst . itemBody
>         packLinksCtx = field "packLink" $ return . snd . itemBody
>         raceLinksCtx = sbLinksCtx `mappend` packLinksCtx
>     raceLinksTpl <- loadBody "templates/race-links.html"
>     applyTemplateList raceLinksTpl raceLinksCtx pairedLinks

News posts and their pagination

The third and final archive rule begins with the news post pagination. First, the posts themselves are processed. They are written in Markdown, but we defer the Pandoc rendering to the compilers of the destination pages; that is, index.markdown and the news archive pages. For that reason, there is no need to escape the $ of the Hakyll fields within the news posts (or within the Markdown templates we use here), just as there is none in index.markdown. Note how the template to use is chosen according to the presence of the headline metadata field. That is how we make the news posts with a highlighted first line, such as the final reports of a race. Finally, the applyAsTemplate is there to allow using the custom fields in baseCtx, just like in the Markdown pages we have seen earlier on.

> newsAndArchiveInterfaceRules :: Rules ()
> newsAndArchiveInterfaceRules = do
> 
>     match "news/*" $
>         compile $ do
>             metadata <- getUnderlying >>= getMetadata
>             let newsCtx = dateField "date" "%F %R" `mappend` baseCtx
>                 newsTemplate = maybe "templates/news.markdown"
>                     (const "templates/news-with-headline.markdown")
>                     (lookupString "headline" metadata)
>             getResourceBody
>                 >>= applyAsTemplate baseCtx
>                 >>= loadAndApplyTemplate newsTemplate newsCtx

Now, we get to what arguably is our most interesting stretch of code: the news pagination mechanism and the generation of the news archive pages (as we will see, they are inextricably linked). The pagination follows no rigid scheme based on dates or post counts; rather, the page boundaries are set through the metadata of the news posts in an entirely ad hoc manner. How do we achieve that? The first step is to load the metadata of the news posts inside the Rules monad.

>     newsMetadata <- getAllMetadata "news/*"

Our strategy will rely on a boundary metadata field, present in the news posts which should open a new page of the archive. The contents of the field do not concern us right now; later we will use them for things such as page titles. The boundary field is used to prepare newsBounds, a list of type [(UTCTime, Identifier)] whose elements pair the publication time of each boundary post with its identifier, switched to a different version. It is prepared as follows:

>     newsBounds <- sortBy (flip $ comparing fst) <$>
>         sequence
>             [ (flip (,) $ setVersion (Just "newsArchive") id') <$>
>                 getItemUTC defaultTimeLocale id'
>             | (id', md) <- newsMetadata
>             , isJust (lookupString "boundary" md)
>             ]
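
With two hypothetical boundary posts, the result would look like:

> -- newsBounds ==
> --   [ (2013-08-11 21:01:00 UTC, "news/2013-08-11.markdown")
> --   , (2013-05-02 19:30:00 UTC, "news/2013-05-02.markdown") ]
> -- with both identifiers carrying the "newsArchive" version.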

Next, we separate newsBounds into a list of times and a list of identifiers. The newsArchiveIdents thus obtained will be used as identifiers of the news archive pages that will be created. Given our naming conventions for the pages, we will not be able to use the identifier paths for, e.g., routes or link building. The upside of this choice is the ability to access the metadata of the boundary news posts, especially the published and boundary fields.

>     let (newsBoundTimes, newsArchiveIdents) = unzip newsBounds

The pagination itself is performed by the newsPaginatorCompiler. It produces a list of items, one for each of the news archive identifiers plus an extra one for “recent news”; that is, news later than the last boundary post. Note the sorting with recentFirst and the usage of (>=) for date comparisons; they ensure the pages are grouped as we specified.

>     let groupWithBounds :: (a -> b) -> (b -> b -> Bool)
>                         -> [b] -> [a] -> [[a]]
>         groupWithBounds fProject fCompare bounds xs =
>             let preds = map ((. fProject) . fCompare) bounds
>                 op ps (xs, gs) =
>                     let (ys, g) = break ps xs
>                     in (ys, g : gs)
>             in uncurry (:) $ foldr op (xs, []) preds
> 
>     -- groupWithBounds id (<) [2,5] [1..7] == [[1,2],[3,4,5],[6,7]]
> 
>     let newsPaginatorCompiler :: Compiler [Item String]
>         newsPaginatorCompiler = do
>             posts <- loadAll ("news/*" .&&. hasNoVersion) >>= recentFirst
>             postDates <- mapM (getItemUTC defaultTimeLocale . itemIdentifier) posts
>             let postPairs = zip postDates posts
>                 groups = map (map snd) $
>                     groupWithBounds fst (>=) newsBoundTimes postPairs
>                 bodyGroups = map (concatMap itemBody) groups
>             return $ zipWith Item
>                 ("virtual/recent-news.markdown" : newsArchiveIdents) bodyGroups
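
The resulting list is therefore ordered from newest to oldest:

> -- head item:  posts more recent than the latest boundary ("recent news")
> -- tail items: one archive page each, newest first, each opening with
> --             its boundary post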

We do not wish to have the paginator executed once for every archive page, so we (ab)use an extra compilation rule to run it just once and store the items produced by the paginator in snapshots - except for the recent news one, which is “returned” in the usual manner by the compiler (and then loaded by the index.markdown compiler, as we have already seen). Note that the snapshots contain the whole items, and not just the bodies - the saved values are of type Item (Item String) - and that they are tagged with the file path of the corresponding news archive identifier.

>     create ["virtual/news-includes"] $
>         compile $ do
>             (recentNews : archiveItems) <- newsPaginatorCompiler
>             forM_ archiveItems $ \it ->
>                 makeItem it >>= saveSnapshot (toFilePath . itemIdentifier $ it)
>             makeItem $ itemBody recentNews

Now we can finally create the news archive pages. The first compiler retrieves the virtual/news-includes snapshot corresponding to the page being created and renders the Markdown with Pandoc. From there, it is smooth sailing; very similar to what we did before with the non-news archive pages. The helper function makeNewsArchivePath builds the destination path of the archive pages from the published field of the corresponding boundary news post. It will be used not only here but later, when preparing the archives interface page.

>     let makeNewsArchivePath :: Metadata -> FilePath
>         makeNewsArchivePath md =
>             ("archives/includes/" ++) . (++ "-news.html")
>             . (formatTime defaultTimeLocale ourTimeFormat
>                 :: LocalTime -> String)
>             . parseTimeOrError True defaultTimeLocale publishedTimeFormat
>             . fromMaybe (error
>                 "News archives: boundary post without publication date.") $
>             lookupString "published" md
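> 
>     -- e.g. a boundary post with a (hypothetical) published field of
>     --   "2013-08-11T21:01:00UTC"
>     -- maps to "archives/includes/2013-08-11-21-01-news.html".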
> 
>     create newsArchiveIdents $ version "newsArchive" $ do
>         route   $ metadataRoute $ constRoute
>             . makeNewsArchivePath
>         compile $ do
>             let staticMaker = Just $ \id' ->
>                     (makeStaticPath . makeNewsArchivePath) <$> getMetadata id'
>             path <- toFilePath <$> getUnderlying
>             loadSnapshotBody "virtual/news-includes" path
>                 >>= processWithPandoc
>                 >>= baseArchiveCompiler staticMaker
> 
>     create newsArchiveIdents $ version "static" $ do
>         route   $ metadataRoute $ constRoute
>             . replaceAll "/includes/" (const "/static/") . makeNewsArchivePath
>         compile $
>             let titleMaker = Just $ \id' ->
>                     ("News archive: " ++) <$> getMetadataField' id' "boundary"
>             in staticPageCompiler titleMaker (Just "newsArchive")

We still have to deal with the links and options for the news archive pages. They are created in pretty much the same way as those of the other archive pages.

>     create newsArchiveIdents $ version "index" $
>         compile $
>             let captionMaker = \id' ->
>                     getMetadataField' id' "boundary"
>                 hrefMaker = \id' -> makeStaticPath
>                     . makeNewsArchivePath <$> getMetadata id'
>             in indexLinkCompiler captionMaker hrefMaker
> 
>     create newsArchiveIdents $ version "option" $
>         compile $
>             let captionMaker = \id' ->
>                     getMetadataField' id' "boundary"
>                 valMaker = \id' -> takeFileName
>                     . makeNewsArchivePath <$> getMetadata id'
>             in optionCompiler captionMaker valMaker

Archives form and index

At long last, it is time to bring all the small bits together and prepare the archives form HTML (which is injected into archives.html with JavaScript) and the archives index. The really important thing to notice here is the usage of rulesExtraDependencies. The links and form options corresponding to the news archive pages depend on their identifiers, but not on their content. Since these identifiers are created by metadata manipulation in the Rules monad, we have to state the dependency on the newsArchiveIdents explicitly; otherwise form and index would not be recompiled after changes to the metadata which sets the boundaries between the news archive pages.

Abstracting the implementation details, the two compilers are very similar. It is worth pointing out that the various fooOptions and fooLinks are fields of the pages being compiled; for that reason, we use applyAsTemplate in both cases. As for the actual templates, archives-form-select-list adds a blank option to the <select>, and simple-archive-link just wraps the index links with a <p> (except the race links of course, as they have their own template loaded in the raceLinksCompiler).

>     rulesExtraDependencies (map IdentifierDependency newsArchiveIdents) $ do
> 
>         match "archives/form.html" $ do
>             route   $ constRoute "includes/archives-form.html"
>             compile $ do
>                 raceList <- loadAll ("archives/includes/race*.html"
>                     .&&. hasVersion "option")
>                 seasonList <- loadAll ("archives/includes/season*.html"
>                     .&&. hasVersion "option")
>                 packList <- loadAll ("files/packs/scr*.zip"
>                     .&&. hasVersion "option")
>                 newsList <- loadAll . fromList $
>                     map (setVersion $ Just "option") newsArchiveIdents
>                 let processList its =
>                         recentFirst its
>                         >>= makeItem . concatMap itemBody
>                         >>= loadAndApplyTemplate
>                                 "templates/archives-form-select-list.html"
>                                 (bodyField "optionList")
>                         >>= return . itemBody
>                 races <- processList raceList
>                 seasons <- processList seasonList
>                 packs <- processList packList
>                 news <- processList newsList
>                 let archFormCtx = mconcat [ constField "raceOptions" races
>                                           , constField "seasonOptions" seasons
>                                           , constField "packOptions" packs
>                                           , constField "newsOptions" news
>                                           ]
>                 getResourceBody
>                     >>= applyAsTemplate archFormCtx
> 
>         match "archives/static-index.markdown" $ do
>             route   $ setExtension "html"
>             compile $ do
>                 seasonList <- loadAll ("archives/includes/season*.html"
>                     .&&. hasVersion "index")
>                 newsList <- loadAll . fromList $
>                     map (setVersion $ Just "index") newsArchiveIdents
>                 linkTpl <- loadBody "templates/simple-archive-link.html"
>                 let processList its =
>                         chronological its
>                         >>= applyTemplateList linkTpl (bodyField "link")
>                 seasons <- processList seasonList
>                 news <- processList newsList
>                 races <- raceLinksCompiler
>                             "archives/includes/race*.html"
>                             "files/packs/scr*.zip"
>                 let archIndexCtx = mconcat [ constField "raceLinks" races
>                                            , constField "seasonLinks" seasons
>                                            , constField "newsLinks" news
>                                            ]
>                 pandocCompilerOfOurs
>                     >>= applyAsTemplate archIndexCtx
>                     >>= loadAndApplyTemplate "templates/default.html" baseCtx
>                     >>= relativizeUrls

The RaceVars module

RaceVars (raw source) is a tiny auxiliary module which defines a few site-wide constants holding information about the current race. These constants are introduced in all appropriate places by means of baseCtx, the default context for the site. It is crucial that the parameters defined here remain the same everywhere, lest we leave the racers confused and unhappy over misleading information.

> module RaceVars where
> 
> trackName, carName :: String
> trackName = "Cydonia"
> carName = "Porsche March Indy"
> 
> trackFilePath :: FilePath
> trackFilePath = "/files/CYDONIA.TRK"
> 
> startDateStr, deadlineStr :: String
> startDateStr = "25-Oct-2009 20:00 (GMT-3)"
> deadlineStr = "15-Nov-2009 20:00 (GMT-3)"