Reflections 2024

github-actions[bot] edited this page Dec 23, 2024 · 83 revisions

2016 / 2018 / 2019 / 2020 / 2021 / 2022 / 2023 / 2024

Day 1

Top / Prompt / Code / Standalone

Day 1 is always a Haskell warmup :)

One nice way to get both lists is to parse into [(Int, Int)] and use unzip :: [(a, b)] -> ([a], [b]) to turn a list of pairs into a pair of lists.

Once we have our two [Int]s, part 1 pairs the smallest of each list together, so we sort both and zip:

import Data.List (sort)

part1 :: [Int] -> [Int] -> Int
part1 xs ys = sum $ map abs (zipWith subtract (sort xs) (sort ys))

Part 2 we can build a frequency map and then map a lookup:

import qualified Data.Map as M

part2 :: [Int] -> [Int] -> Int
part2 xs ys = sum $ map (\x -> x * M.findWithDefault 0 x freqMap) xs
  where
    freqMap :: M.Map Int Int
    freqMap = M.fromListWith (+) (map (,1) ys)
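As a quick sanity check, here is a self-contained sketch of both parts run on the sample lists from the puzzle statement (3 4 2 1 3 3 and 4 3 5 3 9 3), whose published answers are 11 and 31:

```haskell
import Data.List (sort)
import qualified Data.Map as M

-- part 1: pair smallest-with-smallest, so sort both lists first
part1 :: [Int] -> [Int] -> Int
part1 xs ys = sum $ map abs (zipWith subtract (sort xs) (sort ys))

-- part 2: similarity score via a frequency map of the right-hand list
part2 :: [Int] -> [Int] -> Int
part2 xs ys = sum $ map (\x -> x * M.findWithDefault 0 x freqMap) xs
  where
    freqMap = M.fromListWith (+) [(y, 1) | y <- ys]
```

part1 [3,4,2,1,3,3] [4,3,5,3,9,3] gives 11, and part2 on the same lists gives 31.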

Day 1 Benchmarks

>> Day 01a
benchmarking...
time                 393.8 μs   (392.4 μs .. 394.9 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 393.0 μs   (392.4 μs .. 393.5 μs)
std dev              1.986 μs   (1.684 μs .. 2.403 μs)

* parsing and formatting times excluded

>> Day 01b
benchmarking...
time                 181.5 μs   (181.0 μs .. 182.3 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 182.2 μs   (181.9 μs .. 182.7 μs)
std dev              1.178 μs   (755.9 ns .. 1.950 μs)

* parsing and formatting times excluded

Day 2

Top / Prompt / Code / Standalone

Again a straightforward Haskell day. I have a utility function I use for a bunch of these:

countTrue :: (a -> Bool) -> [a] -> Int
countTrue p = length . filter p

So we can run countTrue over our list of [Int]. The predicate is:

import Data.Ix (inRange)

predicate :: [Int] -> Bool
predicate xs =
  all (inRange (1, 3)) diffies
    || all (inRange (1, 3) . negate) diffies
  where
    diffies = zipWith subtract xs (drop 1 xs)

It's a straightforward application of countTrue predicate for part 1. For part 2, we can see if any of the possibilities match the predicate.

part1 :: [[Int]] -> Int
part1 = countTrue predicate

part2 :: [[Int]] -> Int
part2 = countTrue \xs ->
  let possibilities = xs : zipWith (++) (inits xs) (tail (tails xs))
   in any predicate possibilities

inits [1,2,3] gives us [], [1], [1,2], and [1,2,3], and tail (tails xs) gives us [2,3], [3], and []. So we can zip those up to get [2,3], [1,3], and [1,2]: the original list with each element dropped in turn. We just need to make sure we add back in our original xs.

This is probably the simplest way to write it, but there's something cute/recursive we can do using the list "monad" to generate all possibilities: for each x:xs, we can either "drop here" or "drop later":

tryDrops :: [a] -> [[a]]
tryDrops = \case
  [] -> [[]]
  x : xs -> xs : ((x :) <$> tryDrops xs)
        --  ^ drop here
        --        ^ drop later

And this simplifies part 2 significantly:

part2 :: [[Int]] -> Int
part2 = countTrue $ any predicate . tryDrops
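Putting the pieces together, here is a self-contained sketch run against the six sample reports from the puzzle statement (published answers: 2 safe reports for part 1, 4 for part 2):

```haskell
import Data.Ix (inRange)

countTrue :: (a -> Bool) -> [a] -> Int
countTrue p = length . filter p

-- a report is safe if all neighboring differences are within 1-3,
-- either all increasing or all decreasing
predicate :: [Int] -> Bool
predicate xs =
  all (inRange (1, 3)) diffies || all (inRange (1, 3) . negate) diffies
  where
    diffies = zipWith subtract xs (drop 1 xs)

-- all versions of the list with zero or one elements dropped
tryDrops :: [a] -> [[a]]
tryDrops [] = [[]]
tryDrops (x : xs) = xs : map (x :) (tryDrops xs)

part1, part2 :: [[Int]] -> Int
part1 = countTrue predicate
part2 = countTrue (any predicate . tryDrops)
```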

Day 2 Benchmarks

>> Day 02a
benchmarking...
time                 49.05 μs   (48.35 μs .. 49.79 μs)
                     0.993 R²   (0.981 R² .. 0.999 R²)
mean                 49.39 μs   (48.18 μs .. 52.99 μs)
std dev              5.746 μs   (1.093 μs .. 10.17 μs)
variance introduced by outliers: 87% (severely inflated)

* parsing and formatting times excluded

>> Day 02b
benchmarking...
time                 425.5 μs   (424.0 μs .. 426.9 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 424.3 μs   (423.4 μs .. 426.0 μs)
std dev              3.680 μs   (2.733 μs .. 6.026 μs)

* parsing and formatting times excluded

Day 3

Top / Prompt / Code / Standalone

You can think of the whole thing as essentially a state machine / finite automaton. For part 1 it's straightforward: chomp as many mul(x,y) as possible, summing the muls:

import qualified Control.Monad.Combinators as P
import qualified Text.Megaparsec as P
import qualified Text.Megaparsec.Char as P
import qualified Text.Megaparsec.Char.Lexer as PL

parseMul :: P.Parsec v String Int
parseMul = product <$> P.between "mul(" ")" (PL.decimal `P.sepBy` ",")

part1 :: P.Parsec v String Int
part1 = sum <$> P.many (dropUntil parseMul)

-- | A utility parser combinator I have that skips until the first match
dropUntil :: P.Parsec e s end -> P.Parsec e s end
dropUntil x = P.try (P.skipManyTill P.anySingle (P.try x))

For part 2 the state machine has an "on or off" state: in the "off" state, search for the next do(). In the "on" state, search for the next mul and continue on, or the next don't() and continue off.

part2 :: P.Parsec v String Int
part2 = sum <$> goEnabled
  where
    goDisabled = P.option [] . dropUntil $ "do()" *> goEnabled
    goEnabled = P.option [] . dropUntil $
      P.choice
        [ "don't()" *> goDisabled
        , (:) <$> parseMul <*> goEnabled
        ]
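If you want to play with the same on/off state machine without pulling in megaparsec, here is a base-only sketch using stripPrefix (the sample strings, and the expected totals 161 and 48, are from the puzzle statement):

```haskell
import Data.Char (isDigit)
import Data.List (stripPrefix)

-- The Bool says whether to honor do()/don't() toggles (part 2) or not (part 1).
sumMuls :: Bool -> String -> Int
sumMuls useToggles = goOn
  where
    goOn [] = 0
    goOn s
      | useToggles
      , Just rest <- stripPrefix "don't()" s = goOff rest
      | Just (n, rest) <- mul s = n + goOn rest
      | otherwise = goOn (drop 1 s)
    goOff [] = 0
    goOff s
      | Just rest <- stripPrefix "do()" s = goOn rest
      | otherwise = goOff (drop 1 s)
    -- chomp a single mul(x,y), returning the product and the remaining string
    mul s = do
      s1 <- stripPrefix "mul(" s
      let (a, s2) = span isDigit s1
      s3 <- stripPrefix "," s2
      let (b, s4) = span isDigit s3
      s5 <- stripPrefix ")" s4
      if null a || null b then Nothing else Just (read a * read b, s5)
```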

Day 3 Benchmarks

>> Day 03a
benchmarking...
time                 1.173 ms   (1.164 ms .. 1.181 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 1.179 ms   (1.170 ms .. 1.186 ms)
std dev              29.67 μs   (22.62 μs .. 37.85 μs)
variance introduced by outliers: 14% (moderately inflated)

* parsing and formatting times excluded

>> Day 03b
benchmarking...
time                 1.827 ms   (1.809 ms .. 1.860 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 1.792 ms   (1.786 ms .. 1.809 ms)
std dev              28.94 μs   (18.52 μs .. 51.00 μs)

* parsing and formatting times excluded

Day 4

Top / Prompt / Code / Standalone

Here we are matching "stencils" across different windows, so it's always fun to use comonads for this. That's because extend :: (w a -> b) -> w a -> w b lets you automagically convert a function on windows (the w a -> b) to a w a -> w b, the application across every window.

First we parse our input into a Map Point Char, where Point is V2 Int (data V2 a = V2 a a, from the linear library), a tuple-like type with the componentwise Num instance that I use for most of these puzzles.

Our stencils are (centered around 0,0):

import Linear.V2 (V2(..), (*^))

xmas :: [Map (V2 Int) Char]
xmas =
    [ M.fromList [(i *^ step, x) | (i, x) <- zip [0 ..] "XMAS"]
    | d <- [V2 1 0, V2 0 1, V2 1 1, V2 (-1) 1]
    , step <- [d, negate d]
    ]

crossMas :: [Map (V2 Int) Char]
crossMas =
    [ M.insert 0 'A' (diag1 <> diag2)
    | diag1 <- M.fromList . zip [V2 (-1) (-1), V2 1 1] <$> ["MS", "SM"]
    , diag2 <- M.fromList . zip [V2 1 (-1), V2 (-1) 1] <$> ["MS", "SM"]
    ]

Now some utility functions to wrap and unwrap our Map (V2 Int) Char into a Store (V2 Int) (Maybe Char) store comonad, so we can use its Comonad instance:

mapToStore :: (Ord k, Num k) => Map k a -> Store k (Maybe a)
mapToStore mp = store (`M.lookup` mp) 0

mapFromStore :: Num k => Set k -> Store k a -> Map k a
mapFromStore ks = experiment \x -> M.fromSet (+ x) ks

Now a function to check if a stencil matches a neighborhood:

checkStencil :: Num k => Map k a -> Store k (Maybe a) -> Bool
checkStencil mp x = all (\(p, expected) -> peeks (+ p) x == Just expected) (M.toList mp)

countWindowMatches :: (Num k, Eq a) => [Map k a] -> Store k (Maybe a) -> Int
countWindowMatches mps x = length $ filter (`checkStencil` x) mps

Now we have a Store k (Maybe a) -> Int, which takes a window and gives an Int that is the number of stencil matches at the window origin. The magic of comonad is that now we have extend (countWindowMatches stencils) :: Store k (Maybe a) -> Store k Int, which runs that windowed function across the entire map.

countMatches :: Eq a => [Map (V2 Int) a] -> Map (V2 Int) a -> Int
countMatches stencils xs =
    sum . mapFromStore (M.keysSet xs) . extend (countWindowMatches stencils) . mapToStore $ xs

part1 :: Map (V2 Int) Char -> Int
part1 = countMatches xmas

part2 :: Map (V2 Int) Char -> Int
part2 = countMatches crossMas
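The comonad version needs the comonad and linear packages, but the stencil idea itself can be exercised with just containers, checking every stencil at every grid point directly (plain (Int, Int) pairs standing in for V2 Int). On the 10×10 sample grid from the puzzle this finds the published answers: 18 XMAS matches and 9 X-MAS matches:

```haskell
import qualified Data.Map as M

type P = (Int, Int)

padd :: P -> P -> P
padd (a, b) (c, d) = (a + c, b + d)

-- eight directed XMAS stencils, anchored at the X
xmas :: [M.Map P Char]
xmas =
  [ M.fromList [((i * dx, i * dy), c) | (i, c) <- zip [0 ..] "XMAS"]
  | (x, y) <- [(1, 0), (0, 1), (1, 1), (-1, 1)]
  , (dx, dy) <- [(x, y), (-x, -y)]
  ]

-- four X-MAS stencils, anchored at the central A
crossMas :: [M.Map P Char]
crossMas =
  [ M.insert (0, 0) 'A' (d1 <> d2)
  | d1 <- M.fromList . zip [(-1, -1), (1, 1)] <$> ["MS", "SM"]
  , d2 <- M.fromList . zip [(1, -1), (-1, 1)] <$> ["MS", "SM"]
  ]

-- count every (point, stencil) pair where all offsets match the grid
countMatches :: [M.Map P Char] -> M.Map P Char -> Int
countMatches stencils grid = length
  [ ()
  | p <- M.keys grid
  , st <- stencils
  , all (\(q, c) -> M.lookup (padd p q) grid == Just c) (M.toList st)
  ]

parseGrid :: [String] -> M.Map P Char
parseGrid ls = M.fromList
  [ ((x, y), c) | (y, row) <- zip [0 ..] ls, (x, c) <- zip [0 ..] row ]
```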

Day 4 Benchmarks

>> Day 04a
benchmarking...
time                 37.83 ms   (37.05 ms .. 38.53 ms)
                     0.998 R²   (0.991 R² .. 1.000 R²)
mean                 38.29 ms   (37.96 ms .. 39.21 ms)
std dev              1.043 ms   (345.7 μs .. 1.881 ms)

* parsing and formatting times excluded

>> Day 04b
benchmarking...
time                 22.07 ms   (21.94 ms .. 22.20 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 22.06 ms   (21.99 ms .. 22.13 ms)
std dev              156.2 μs   (117.2 μs .. 204.2 μs)

* parsing and formatting times excluded

Day 5

Top / Prompt / Code / Standalone

This one lends itself pretty nicely to topologically sorting each page list according to the graph of "X precedes Y" edges.

If we have a list of (Int, Int) rules, we can build a graph where the nodes are the page numbers and the edges are "X precedes Y".

Then for each page list, we can filter that graph for only the nodes in that page list, and then toposort it:

import qualified Data.Graph.Inductive as G

sortByRules :: [(Int, Int)] -> [Int] -> [Int]
sortByRules rules = \xs ->
    G.topsort . G.nfilter (`S.member` S.fromList xs) $ ruleGraph
  where
    ruleGraph :: G.Gr () ()
    ruleGraph =
      G.mkUGraph
        (nubOrd $ foldMap (\(x,y) -> [x,y]) rules)
        rules

part1 :: [(Int, Int)] -> [[Int]] -> Int
part1 rules pages = sum
    [ middleVal orig
    | orig <- pages
    , orig == sorter orig
    ]
  where
    sorter = sortByRules rules

part2 :: [(Int, Int)] -> [[Int]] -> Int
part2 rules pages = sum
    [ middleVal sorted
    | orig <- pages
    , let sorted = sorter orig
    , orig /= sorted
    ]
  where
    sorter = sortByRules rules

We write sortByRules with a lambda closure (binding the partially applied sorter in each part) to ensure that the graph is generated only once, with the closure then re-applied for every page list.

One cute way to find the middle value is to traverse the list twice "in parallel", with one pointer moving twice as quickly as the other:

middleVal :: [a] -> a
middleVal xs0 = go xs0 xs0
  where
    go (_:xs) (_:_:ys) = go xs ys
    go (x:_) _ = x
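A quick check (the list below is the first sample update from the puzzle statement, whose middle page is 61):

```haskell
-- fast/slow pointer pair: the fast one runs out just as the slow one
-- reaches the middle (page lists in this puzzle always have odd length)
middleVal :: [a] -> a
middleVal xs0 = go xs0 xs0
  where
    go (_ : xs) (_ : _ : ys) = go xs ys
    go (x : _) _ = x
    go [] _ = error "middleVal: empty list"
```

middleVal [75,47,61,53,29] gives 61.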

Day 5 Benchmarks

>> Day 05a
benchmarking...
time                 18.31 ms   (18.13 ms .. 18.47 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 18.42 ms   (18.27 ms .. 18.57 ms)
std dev              359.7 μs   (219.9 μs .. 538.4 μs)

* parsing and formatting times excluded

>> Day 05b
benchmarking...
time                 17.68 ms   (17.64 ms .. 17.72 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 17.69 ms   (17.65 ms .. 17.72 ms)
std dev              93.29 μs   (64.40 μs .. 139.0 μs)

* parsing and formatting times excluded

Day 6

Top / Prompt / Code / Standalone

This one features a common staple of Advent of Code: the 2D grid. In this case we can parse it as a Set Point of boulders and an initial starting Point, with type Point = V2 Int from the linear library, which has good Num, Functor, Foldable instances etc.

Then the (possibly infinite) stepping function becomes:

import Data.Finite
import Data.Ix (inRange)
import Linear.V2
import qualified Data.Set as S
import qualified Data.Vector.Sized as SV

type Point = V2 Int

stepInDir :: Finite 4 -> Point
stepInDir = SV.index $ SV.fromTuple (V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0)

stepPath :: Int -> S.Set Point -> Point -> [(Point, Finite 4)]
stepPath maxCoord boulders = takeWhile inBounds . iterate go . (,0)
  where
    go (x, d)
      | x' `S.member` boulders = (x, d + 1)
      | otherwise = (x', d)
      where
        x' = x + stepInDir d
    inBounds (p, _) = all (inRange (0, maxCoord)) p

part1 :: Set Point -> Point -> Int
part1 boulders = S.size . S.fromList . map fst . stepPath maxCoord boulders
  where
    maxCoord = maximum (foldMap toList boulders)

Here I use Finite 4 to give a cyclic type I can repeatedly rotate, and look up a single step in that direction from a 4-vector. In my actual code I use a data type data Dir = North | East | South | West that is essentially the same thing.

For part 2 we can just try to insert new boulders along the original route and count the boulders that give loops. We can use tortoise and hare to do loop detection.

hasLoop :: Eq a => [a] -> Bool
hasLoop xs0 = go xs0 (drop 1 xs0)
  where
    go (x:xs) (y:_:ys) = x == y || go xs ys
    go _ _ = False

part2 :: Set Point -> Point -> Int
part2 boulders p0 = length . filter goodBoulder . nubOrd . map fst $ stepPath maxCoord boulders p0
  where
    maxCoord = maximum (foldMap toList boulders)
    goodBoulder p = p /= p0 && hasLoop (stepPath maxCoord (S.insert p boulders) p)
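Note that hasLoop is safe to run on the infinite path lists that a looping guard produces: the hare catches the tortoise without forcing the whole list, and any finite path falls through to False:

```haskell
-- tortoise-and-hare loop detection: the second pointer moves two steps
-- for every one step of the first; they meet iff the list cycles
hasLoop :: Eq a => [a] -> Bool
hasLoop xs0 = go xs0 (drop 1 xs0)
  where
    go (x : xs) (y : _ : ys) = x == y || go xs ys
    go _ _ = False
```

hasLoop (cycle [1,2,3]) returns True (and terminates, despite the infinite list), while hasLoop [1..10] returns False.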

Overall runs in about 1 second on my machine. You could optimize it a bit by jumping directly to the next boulder. Basically you'd keep a map of x to the y's of all boulders in that column so you can move vertically, and then a map of y to the x's of all boulders in that row so you can move horizontally.

collapseAxes :: Foldable f => f Point -> V2 (Map Int (Set Int))
collapseAxes = foldl' (flip addAxesMap) mempty

addAxesMap :: Point -> V2 (Map Int (Set Int)) -> V2 (Map Int (Set Int))
addAxesMap (V2 x y) (V2 xMaps yMaps) =
  V2
    (M.insertWith (<>) x (S.singleton y) xMaps)
    (M.insertWith (<>) y (S.singleton x) yMaps)

slideAxes :: V2 (Map Int (Set Int)) -> Point -> Finite 4 -> Maybe Point
slideAxes (V2 xMap yMap) (V2 x y) = SV.index $ SV.fromTuple
  ( S.lookupLT y (M.findWithDefault mempty x xMap) <&> \y' -> V2 x (y' + 1)
  , S.lookupGT x (M.findWithDefault mempty y yMap) <&> \x' -> V2 (x' - 1) y
  , S.lookupGT y (M.findWithDefault mempty x xMap) <&> \y' -> V2 x (y' - 1)
  , S.lookupLT x (M.findWithDefault mempty y yMap) <&> \x' -> V2 (x' + 1) y
  )

stepPath' :: V2 (Map Int (Set Int)) -> Point -> [(Point, Finite 4)]
stepPath' as = unfoldr go . (,0)
  where
    go (p, d) = do
      p' <- slideAxes as p d
      pure ((p', d + 1), (p', d + 1))

part2' :: Set Point -> Point -> Int
part2' boulders p0 = length . filter goodBoulder . nubOrd . map fst $ stepPath maxCoord boulders p0
  where
    maxCoord = maximum (foldMap toList boulders)
    axesMap0 = collapseAxes boulders
    goodBoulder p = p /= p0 && hasLoop (stepPath' (addAxesMap p axesMap0) p0)

This cuts the time by about 30x.

Day 6 Benchmarks

>> Day 06a
benchmarking...
time                 1.452 ms   (1.432 ms .. 1.470 ms)
                     0.999 R²   (0.998 R² .. 0.999 R²)
mean                 1.448 ms   (1.440 ms .. 1.462 ms)
std dev              37.35 μs   (24.50 μs .. 53.82 μs)
variance introduced by outliers: 14% (moderately inflated)

* parsing and formatting times excluded

>> Day 06b
benchmarking...
time                 36.06 ms   (35.95 ms .. 36.17 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 35.89 ms   (35.69 ms .. 35.99 ms)
std dev              290.5 μs   (156.0 μs .. 433.8 μs)

* parsing and formatting times excluded

Day 7

Top / Prompt / Code / Standalone

This one works out well as a list monad based search. Essentially you are picking operations where:

targ == (x ? y) ? z

and if those ? operations induce a list monad split, you can then search all of the possible choices:

import Control.Monad (foldM)

checkEquation :: [Int -> Int -> Int] -> Int -> [Int] -> Bool
checkEquation ops targ (x : xs) = targ `elem` foldM branchOnOp x xs
  where
    branchOnOp a b = map (\f -> f a b) ops

Then you can do checkEquation [(+),(*)] for part 1 and checkEquation [(+),(*),cat] for part 2.

However, it is kind of helpful to work backwards from the target to see if you can get the initial number. For example, in 292: 11 6 16 20, you can eliminate * as an option for the final operation right off the bat.

So really, you can rephrase the problem as:

x == y ? (z ? targ)

where ? are the inverse operations, but you have some way to easily eliminate operations that don't make sense.

checkBackEquation :: [Int -> Int -> Maybe Int] -> Int -> [Int] -> Bool
checkBackEquation unOps targ (x:xs) = x `elem` foldrM branchOnUnOp targ xs
  where
    branchOnUnOp a b = mapMaybe (\f -> f a b) unOps

And our un-ops are (these use Maybe comprehensions, so they need the MonadComprehensions extension):

unAdd :: Int -> Int -> Maybe Int
unAdd x y = [y - x | y >= x]

unMul :: Int -> Int -> Maybe Int
unMul x y = [y `div` x | y `mod` x == 0]

-- unCat x y strips the digits of x off the end of y, if they match
unCat :: Int -> Int -> Maybe Int
unCat x y = [d | m == x]
  where
    pow = length . takeWhile (<= x) $ iterate (* 10) 1
    (d, m) = y `divMod` (10 ^ pow)

So part 1 is checkBackEquation [unAdd, unMul] and part 2 is checkBackEquation [unAdd, unMul, unCat].
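Here is the backwards search as a self-contained sketch, checked against three equations from the puzzle sample: 292: 11 6 16 20 is satisfiable with + and *, 161011: 16 10 13 is not, and 7290: 6 8 6 15 additionally needs concatenation:

```haskell
{-# LANGUAGE MonadComprehensions #-}

import Data.Foldable (foldrM)
import Data.Maybe (mapMaybe)

-- work right-to-left from the target; the list monad inside foldrM
-- branches on every un-op that is still possible
checkBackEquation :: [Int -> Int -> Maybe Int] -> Int -> [Int] -> Bool
checkBackEquation unOps targ (x : xs) = x `elem` foldrM branchOnUnOp targ xs
  where
    branchOnUnOp a b = mapMaybe (\f -> f a b) unOps
checkBackEquation _ _ [] = False

unAdd, unMul, unCat :: Int -> Int -> Maybe Int
unAdd x y = [y - x | y >= x]
unMul x y = [y `div` x | y `mod` x == 0]
unCat x y = [d | m == x]
  where
    pow = length . takeWhile (<= x) $ iterate (* 10) 1
    (d, m) = y `divMod` (10 ^ pow)
```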

Timing-wise, moving from forwards to backwards brought my times for part 2 from 380ms to 1.2ms.

Day 7 Benchmarks

>> Day 07a
benchmarking...
time                 685.2 μs   (680.2 μs .. 692.3 μs)
                     0.989 R²   (0.975 R² .. 0.999 R²)
mean                 723.5 μs   (701.6 μs .. 756.7 μs)
std dev              94.38 μs   (57.31 μs .. 128.7 μs)
variance introduced by outliers: 84% (severely inflated)

* parsing and formatting times excluded

>> Day 07b
benchmarking...
time                 1.260 ms   (1.258 ms .. 1.262 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.259 ms   (1.258 ms .. 1.260 ms)
std dev              3.710 μs   (2.848 μs .. 4.910 μs)

* parsing and formatting times excluded

Day 8

Top / Prompt / Code / Standalone

Mostly straightforward Haskell, building up the set of all antinodes by iterating over every pair of antennae. The main thing we parameterize over is the way of generating the antinode points from a given pair of locations.

makeAntinodes :: Eq a => Map Point a -> (Point -> Point -> [Point]) -> Set Point
makeAntinodes mp genPts = S.fromList do
  (p1, c1) <- M.toList mp
  (p2, c2) <- M.toList mp
  guard $ p1 /= p2 && c1 == c2
  genPts p1 p2

day08 :: (Point -> Point -> [Point]) -> Map Point Char -> Int
day08 stepper mp = S.size $
    makeAntinodes ants \p1 p2 ->
      takeWhile (`S.member` allPoints) $ stepper p1 p2
  where
    allPoints = M.keysSet mp
    ants = M.filter (/= '.') mp

day08a :: Map Point Char -> Int
day08a = day08 \p1 p2 -> [p2 + p2 - p1]

day08b :: Map Point Char -> Int
day08b = day08 \p1 p2 -> iterate (+ (p2 - p1)) p2
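A self-contained sketch of the whole day, with plain (Int, Int) pairs standing in for V2 Int so it only needs containers; on the 12×12 sample grid from the puzzle it gives the published answers 14 and 34:

```haskell
import Control.Monad (guard)
import qualified Data.Map as M
import qualified Data.Set as S

type P = (Int, Int)

padd, psub :: P -> P -> P
padd (a, b) (c, d) = (a + c, b + d)
psub (a, b) (c, d) = (a - c, b - d)

-- all antinodes generated from every ordered pair of same-frequency antennae
makeAntinodes :: Eq a => M.Map P a -> (P -> P -> [P]) -> S.Set P
makeAntinodes mp genPts = S.fromList $ do
  (p1, c1) <- M.toList mp
  (p2, c2) <- M.toList mp
  guard $ p1 /= p2 && c1 == c2
  genPts p1 p2

day08 :: (P -> P -> [P]) -> M.Map P Char -> Int
day08 stepper mp = S.size $
    makeAntinodes ants (\p1 p2 -> takeWhile (`S.member` allPoints) (stepper p1 p2))
  where
    allPoints = M.keysSet mp
    ants = M.filter (/= '.') mp

day08a, day08b :: M.Map P Char -> Int
day08a = day08 (\p1 p2 -> [padd p2 (psub p2 p1)])       -- one mirror point
day08b = day08 (\p1 p2 -> iterate (`padd` psub p2 p1) p2) -- whole ray

parseGrid :: [String] -> M.Map P Char
parseGrid ls = M.fromList
  [ ((x, y), c) | (y, row) <- zip [0 ..] ls, (x, c) <- zip [0 ..] row ]
```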

Day 8 Benchmarks

>> Day 08a
benchmarking...
time                 590.0 μs   (587.8 μs .. 592.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 585.1 μs   (583.2 μs .. 586.6 μs)
std dev              5.404 μs   (4.505 μs .. 6.494 μs)

* parsing and formatting times excluded

>> Day 08b
benchmarking...
time                 990.5 μs   (987.5 μs .. 996.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 994.3 μs   (991.8 μs .. 997.7 μs)
std dev              9.220 μs   (8.039 μs .. 11.41 μs)

* parsing and formatting times excluded

Day 9

Top / Prompt / Code / Standalone

Both of these today involve consuming queues, but the nature of the queues is different. For part 1, we consume two queues: the queue of gaps from left to right, and the queue of files from right to left. For part 2, we consume the queue of file blocks from right to left.

We can actually consume the queues in both cases directly into their checksum without going through an intermediate structure, which is kind of convenient too.

First, let's parse the list of numbers into a usable state: for gaps, an IntMap of positions to gap sizes, and for file blocks, an IntMap of positions to file ids and block lengths.

toDiskState :: [a] -> [Int] -> (IntMap Int, IntMap (a, Int))
toDiskState fids =
      IM.mapEither splitter
    . IM.fromList
    . snd
    . mapAccumL go 0
    . zip (intersperse Nothing (Just <$> fids))
  where
    go i (mfid, len) = (i + len, (i, (mfid, len)))
    splitter (mfid, len) = case mfid of
      Nothing -> Left len
      Just fid -> Right (fid, len)

For part 1, the behavior of the queues is non-trivial, so it's helpful to write it using explicit recursion. The first queue is the queue of gaps (which we push back onto with a smaller gap length) and the second queue is the queue of reversed single (file slot index, file id) cells that we pop one-by-one. We also short-circuit to the end if our forward gap indices move past our backwards file indices.

fillGaps
  :: [(Int, Int)]   -- ^ list of (gap starting Index, gap length) left-to-right
  -> [(Int, Int)]   -- ^ list of (single file slot index, file id) right-to-left
  -> Int
fillGaps [] ends = sum $ map (uncurry (*)) ends
fillGaps _ [] = 0
fillGaps ((gapI, gapLen):gaps) ((endI, fid):ends)
  | endI > gapI = gapI * fid + fillGaps (addBack gaps) ends
  | otherwise = endI * fid + sum (map (uncurry (*)) ends)
  where
    addBack
      | gapLen == 1 = id
      | otherwise = ((gapI + 1, gapLen - 1) :)

part1 :: IntMap Int -> IntMap (Int, Int) -> Int
part1 gaps files =
  fillGaps
    (IM.toList gaps)
    [ (i, fid)
    | (i0, (fid, len)) <- IM.toDescList files
    , i <- take len $ iterate (subtract 1) (i0 + len - 1)
    ]

For part 2, our queue consumption is pretty typical, with no re-push or short-circuiting. We just move through every single file in reverse once, so it can be captured as a mapAccumL: a stateful map over the backwards file blocks, where state is the empty slot candidates.

moveBlock :: IntMap Int -> (Int, (Int, Int)) -> (IntMap Int, Int)
moveBlock gaps (i, (fid, fileLen)) = (gaps', hereContrib)
  where
    foundGap = find ((>= fileLen) . snd) . IM.toAscList $ IM.takeWhileAntitone (< i) gaps
    hereContrib = fid * ((fileLen * (fileLen + 1)) `div` 2 + fileLen * (maybe i fst foundGap - 1))
    gaps' = case foundGap of
      Nothing -> gaps
      Just (gapI, gapLen) ->
        let addBack
              | gapLen > fileLen = IM.insert (gapI + fileLen) (gapLen - fileLen)
              | otherwise = id
         in addBack . IM.delete gapI $ gaps

part2 :: IntMap Int -> IntMap (Int, Int) -> Int
part2 gaps files = sum . snd . mapAccumL moveBlock gaps $ IM.toDescList files

Day 9 Benchmarks

>> Day 09a
benchmarking...
time                 6.924 ms   (6.781 ms .. 7.068 ms)
                     0.986 R²   (0.972 R² .. 0.997 R²)
mean                 7.307 ms   (7.129 ms .. 7.627 ms)
std dev              668.0 μs   (424.9 μs .. 912.4 μs)
variance introduced by outliers: 53% (severely inflated)

* parsing and formatting times excluded

>> Day 09b
benchmarking...
time                 16.25 ms   (16.15 ms .. 16.33 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 16.27 ms   (16.23 ms .. 16.31 ms)
std dev              96.57 μs   (73.62 μs .. 132.0 μs)

* parsing and formatting times excluded

Day 10

Top / Prompt / Code / Standalone

A lot of times in Haskell, two problems end up having the same algorithm, just with a different choice of Monoid. This puzzle is a good example of that.

We can do a simple DFS and collect all 9's into a monoid:

gatherNines :: Monoid m => (Point -> m) -> Map Point Int -> Point -> m
gatherNines f mp = go 0
  where
    go x p
      | x == 9 = f p
      | otherwise =
          foldMap (go (x+1)) . M.keys . M.filter (== (x+1)) $ mp `M.restrictKeys` neighbs
      where
        neighbs = S.fromList $ (p +) <$> [V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0]

For part 1 the monoid is Set Point (the unique 9's) and for part 2 the monoid is Sum Int (number of paths)

solve :: Monoid m => (Point -> m) -> (m -> Int) -> Map Point Int -> Int
solve gather observe mp =
    sum . map (observe . gatherNines gather mp) . M.keys $ M.filter (== 0) mp

part1 :: Map Point Int -> Int
part1 = solve S.singleton S.size

part2 :: Map Point Int -> Int
part2 = solve (const (Sum 1)) getSum
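Here is the same DFS as a self-contained sketch, with plain (Int, Int) pairs standing in for V2 Int. On the small 4×4 example grid from the puzzle (the one whose lone trailhead has score 1), part 1 gives 1; counting the distinct trails by hand for part 2 works out to 16:

```haskell
import Data.Char (digitToInt)
import Data.Monoid (Sum (..))
import qualified Data.Map as M
import qualified Data.Set as S

type P = (Int, Int)

-- DFS uphill from a trailhead, collecting the 9's into a monoid
gatherNines :: Monoid m => (P -> m) -> M.Map P Int -> P -> m
gatherNines f mp = go 0
  where
    go x p
      | x == 9 = f p
      | otherwise =
          foldMap (go (x + 1)) . M.keys . M.filter (== (x + 1)) $
            mp `M.restrictKeys` neighbs
      where
        neighbs = S.fromList
          [ (fst p + dx, snd p + dy)
          | (dx, dy) <- [(0, -1), (1, 0), (0, 1), (-1, 0)]
          ]

solve :: Monoid m => (P -> m) -> (m -> Int) -> M.Map P Int -> Int
solve gather observe mp =
    sum . map (observe . gatherNines gather mp) . M.keys $ M.filter (== 0) mp

parseGrid :: [String] -> M.Map P Int
parseGrid ls = M.fromList
  [ ((x, y), digitToInt c) | (y, row) <- zip [0 ..] ls, (x, c) <- zip [0 ..] row ]
```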

Day 10 Benchmarks

>> Day 10a
benchmarking...
time                 4.814 ms   (4.787 ms .. 4.843 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 4.824 ms   (4.801 ms .. 4.852 ms)
std dev              78.24 μs   (54.79 μs .. 116.7 μs)

* parsing and formatting times excluded

>> Day 10b
benchmarking...
time                 4.727 ms   (4.713 ms .. 4.753 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 4.736 ms   (4.726 ms .. 4.752 ms)
std dev              37.76 μs   (28.79 μs .. 49.63 μs)

* parsing and formatting times excluded

Day 11

Top / Prompt / Code / Standalone

Today's "one trick" seems to be realizing that the actual ordered "list" is a red herring: a number's progression doesn't depend on any of its neighbors or on its ordering. So what we really have is not a list but a multi-set. Stepping the multiset through 75 iterations is very efficient --- it shows you what you gain when you use the correct data structure to represent the state!

freqs :: [Int] -> IntMap Int
freqs = IM.fromListWith (+) . map (,1)

stepMap :: IntMap Int -> IntMap Int
stepMap mp = IM.unionsWith (+)
  [ (* n) <$> freqs (step x)
  | (x, n) <- IM.toList mp
  ]

step :: Int -> [Int]
step c
  | c == 0 = [1]
  | even pow = let (a, b) = c `divMod` (10 ^ (pow `div` 2)) in [a, b]
  | otherwise = [c * 2024]
  where
    pow = length . takeWhile (<= c) $ iterate (* 10) 1

part1 :: [Int] -> Int
part1 = sum . (!! 25) . iterate stepMap . freqs

part2 :: [Int] -> Int
part2 = sum . (!! 75) . iterate stepMap . freqs

My original reflections/write-up used data-memocombinators, but after some thought I believe that the frequency map approach is the most natural.
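As a check, the sample stones 125 17 from the puzzle statement grow to 55312 stones after 25 blinks; a self-contained sketch:

```haskell
import qualified Data.IntMap as IM

freqs :: [Int] -> IM.IntMap Int
freqs = IM.fromListWith (+) . map (\x -> (x, 1))

-- step every distinct stone once, scaling the results by its multiplicity
stepMap :: IM.IntMap Int -> IM.IntMap Int
stepMap mp = IM.unionsWith (+)
  [(* n) <$> freqs (step x) | (x, n) <- IM.toList mp]

step :: Int -> [Int]
step c
  | c == 0 = [1]
  | even pow = let (a, b) = c `divMod` (10 ^ (pow `div` 2)) in [a, b]
  | otherwise = [c * 2024]
  where
    pow = length . takeWhile (<= c) $ iterate (* 10) 1
```

sum (iterate stepMap (freqs [125, 17]) !! 25) gives 55312.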

Day 11 Benchmarks

>> Day 11a
benchmarking...
time                 593.6 μs   (592.6 μs .. 594.4 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 593.3 μs   (592.4 μs .. 594.6 μs)
std dev              3.930 μs   (2.585 μs .. 5.749 μs)

* parsing and formatting times excluded

>> Day 11b
benchmarking...
time                 45.70 ms   (45.39 ms .. 46.05 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 44.66 ms   (44.05 ms .. 44.97 ms)
std dev              887.1 μs   (525.0 μs .. 1.377 ms)

* parsing and formatting times excluded

Day 12

Top / Prompt / Code / Standalone

First of all, let's assume we had a function that took a set and found all contiguous regions of that set:

contiguousRegions :: Set Point -> [Set Point]

Now we can take a Map Point a and build a map from each a to all of the contiguous regions of that value:

regions :: Ord a => Map Point a -> Map a [Set Point]
regions mp =
  contiguousRegions
    <$> M.fromListWith (<>) [ (x, S.singleton p) | (p, x) <- M.toList mp ]

Now it helps to take a region and create four sets: all of the region's external neighbors to the north, then all of its external neighbors to the west, then south, then east:

neighborsByDir :: Set Point -> [Set Point]
neighborsByDir pts = neighborsAt <$> [V2 0 1, V2 1 0, V2 0 (-1), V2 (-1) 0]
  where
    neighborsAt d = S.map (+ d) pts `S.difference` pts
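For example (plain pairs standing in for V2 Int): a two-cell region has perimeter 6, and indeed the four neighbor sets have total size 6:

```haskell
import qualified Data.Set as S

type P = (Int, Int)

-- for each cardinal direction, the cells just outside the region
neighborsByDir :: S.Set P -> [S.Set P]
neighborsByDir pts = neighborsAt <$> [(0, 1), (1, 0), (0, -1), (-1, 0)]
  where
    neighborsAt (dx, dy) =
      S.map (\(x, y) -> (x + dx, y + dy)) pts `S.difference` pts
```

sum (map S.size (neighborsByDir (S.fromList [(0,0),(1,0)]))) gives 6.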

Now part 1 basically is the size of all of those points, and part 2 is the number of contiguous regions of those points:

solve :: Ord a => (Set Point -> Int) -> Map Point a -> Int
solve countFences mp = sum
    [ S.size region * countFences dirRegion
    | letterRegions <- M.elems (regions mp)
    , region <- letterRegions
    , dirRegion <- neighborsByDir region
    ]

part1 :: Ord a => Map Point a -> Int
part1 = solve S.size

part2 :: Ord a => Map Point a -> Int
part2 = solve (length . contiguousRegions)

Okay, I'll admit that I had contiguousRegions saved from multiple years of Advent of Code. The actual source isn't too pretty, but I'm including it here for completeness' sake. In my actual code I use Set and NESet (non-empty set) instead of list and set.

-- | Find contiguous regions by cardinal neighbors
contiguousRegions :: Set Point -> Set (NESet Point)
contiguousRegions = startNewPool S.empty
  where
    startNewPool seenPools remaining = case S.minView remaining of
      Nothing -> seenPools
      Just (x, xs) ->
        let (newPool, remaining') = fillUp (NES.singleton x) S.empty xs
         in startNewPool (S.insert newPool seenPools) remaining'
    fillUp boundary internal remaining = case NES.nonEmptySet newBoundary of
      Nothing -> (newInternal, remaining)
      Just nb -> fillUp nb (NES.toSet newInternal) newRemaining
      where
        edgeCandidates = foldMap' cardinalNeighbsSet boundary `S.difference` internal
        newBoundary = edgeCandidates `S.intersection` remaining
        newInternal = NES.withNonEmpty id NES.union internal boundary
        newRemaining = remaining `S.difference` edgeCandidates

Day 12 Benchmarks

>> Day 12a
benchmarking...
time                 44.45 ms   (42.17 ms .. 49.85 ms)
                     0.972 R²   (0.929 R² .. 1.000 R²)
mean                 43.86 ms   (42.95 ms .. 47.64 ms)
std dev              2.986 ms   (792.5 μs .. 5.596 ms)
variance introduced by outliers: 20% (moderately inflated)

* parsing and formatting times excluded

>> Day 12b
benchmarking...
time                 42.87 ms   (42.16 ms .. 43.47 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 42.53 ms   (42.38 ms .. 42.77 ms)
std dev              363.0 μs   (176.7 μs .. 597.5 μs)

* parsing and formatting times excluded

Day 13

Top / Prompt / Code / Standalone

Day 13 Benchmarks

>> Day 13a
benchmarking...
time                 1.887 ms   (1.881 ms .. 1.893 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.891 ms   (1.887 ms .. 1.898 ms)
std dev              18.44 μs   (14.30 μs .. 26.49 μs)

* parsing and formatting times excluded

>> Day 13b
benchmarking...
time                 12.28 ms   (12.09 ms .. 12.54 ms)
                     0.998 R²   (0.996 R² .. 1.000 R²)
mean                 12.06 ms   (12.00 ms .. 12.22 ms)
std dev              246.8 μs   (174.1 μs .. 402.2 μs)

* parsing and formatting times excluded

Day 14

Top / Prompt / Code / Standalone

Day 14 Benchmarks

>> Day 14a
benchmarking...
time                 1.251 ms   (1.246 ms .. 1.255 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.251 ms   (1.244 ms .. 1.261 ms)
std dev              26.18 μs   (21.12 μs .. 32.54 μs)

* parsing and formatting times excluded

>> Day 14b
benchmarking...
time                 14.04 ms   (13.92 ms .. 14.25 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 13.92 ms   (13.88 ms .. 14.02 ms)
std dev              148.2 μs   (69.32 μs .. 252.7 μs)

* parsing and formatting times excluded

Day 15

Top / Prompt / Code / Standalone

Day 15 Benchmarks

>> Day 15a
benchmarking...
time                 2.817 ms   (2.795 ms .. 2.832 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 2.844 ms   (2.819 ms .. 2.912 ms)
std dev              142.4 μs   (45.19 μs .. 261.0 μs)
variance introduced by outliers: 32% (moderately inflated)

* parsing and formatting times excluded

>> Day 15b
benchmarking...
time                 3.903 ms   (3.894 ms .. 3.912 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 3.880 ms   (3.870 ms .. 3.890 ms)
std dev              31.83 μs   (26.58 μs .. 39.40 μs)

* parsing and formatting times excluded

Day 16

Top / Prompt / Code / Standalone

Day 16 Benchmarks

>> Day 16a
benchmarking...
time                 313.6 ms   (307.1 ms .. 319.8 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 314.8 ms   (313.9 ms .. 316.4 ms)
std dev              1.616 ms   (315.0 μs .. 2.207 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 16b
benchmarking...
time                 322.2 ms   (311.2 ms .. 332.4 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 320.8 ms   (317.9 ms .. 324.1 ms)
std dev              3.921 ms   (2.314 ms .. 5.846 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

Day 17

Top / Prompt / Code / Standalone

Day 17 Benchmarks

>> Day 17a
benchmarking...
time                 2.371 μs   (2.248 μs .. 2.531 μs)
                     0.981 R²   (0.962 R² .. 1.000 R²)
mean                 2.246 μs   (2.206 μs .. 2.338 μs)
std dev              211.0 ns   (82.53 ns .. 369.8 ns)
variance introduced by outliers: 87% (severely inflated)

* parsing and formatting times excluded

>> Day 17b
benchmarking...
time                 4.444 μs   (4.421 μs .. 4.463 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 4.441 μs   (4.427 μs .. 4.455 μs)
std dev              52.32 ns   (40.38 ns .. 70.82 ns)

* parsing and formatting times excluded

Day 18

Top / Prompt / Code / Standalone

Day 18 Benchmarks

>> Day 18a
benchmarking...
time                 6.592 ms   (6.559 ms .. 6.638 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 6.546 ms   (6.517 ms .. 6.582 ms)
std dev              105.4 μs   (86.20 μs .. 140.9 μs)

* parsing and formatting times excluded

>> Day 18b
benchmarking...
time                 13.78 ms   (13.73 ms .. 13.83 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 13.80 ms   (13.78 ms .. 13.84 ms)
std dev              77.43 μs   (59.00 μs .. 101.2 μs)

* parsing and formatting times excluded

Day 19

Top / Prompt / Code / Standalone

Day 19 Benchmarks

>> Day 19a
benchmarking...
time                 283.5 ms   (280.3 ms .. 286.0 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 294.9 ms   (289.7 ms .. 301.0 ms)
std dev              7.392 ms   (4.681 ms .. 9.426 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 19b
benchmarking...
time                 280.6 ms   (251.4 ms .. 302.0 ms)
                     0.997 R²   (0.980 R² .. 1.000 R²)
mean                 284.8 ms   (277.9 ms .. 289.1 ms)
std dev              7.324 ms   (1.051 ms .. 10.79 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

Day 20

Top / Prompt / Code / Standalone

Day 20 Benchmarks

>> Day 20a
benchmarking...
time                 40.47 ms   (38.84 ms .. 41.22 ms)
                     0.995 R²   (0.979 R² .. 1.000 R²)
mean                 41.80 ms   (41.15 ms .. 43.37 ms)
std dev              1.933 ms   (950.7 μs .. 3.288 ms)
variance introduced by outliers: 12% (moderately inflated)

* parsing and formatting times excluded

>> Day 20b
benchmarking...
time                 393.2 ms   (380.9 ms .. 402.4 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 393.2 ms   (391.2 ms .. 396.0 ms)
std dev              2.621 ms   (1.132 ms .. 3.309 ms)
variance introduced by outliers: 19% (moderately inflated)

* parsing and formatting times excluded

Day 22

Top / Prompt / Code / Standalone

Day 22 Benchmarks

>> Day 22a
benchmarking...
time                 84.47 ms   (82.54 ms .. 88.08 ms)
                     0.998 R²   (0.993 R² .. 1.000 R²)
mean                 83.30 ms   (82.68 ms .. 84.79 ms)
std dev              1.625 ms   (543.2 μs .. 2.715 ms)

* parsing and formatting times excluded

>> Day 22b
benchmarking...
time                 5.653 s    (5.314 s .. 6.426 s)
                     0.998 R²   (NaN R² .. 1.000 R²)
mean                 6.488 s    (6.099 s .. 7.163 s)
std dev              642.2 ms   (58.08 ms .. 808.9 ms)
variance introduced by outliers: 23% (moderately inflated)

* parsing and formatting times excluded

Day 23

Top / Prompt / Code / Standalone

Day 23 Benchmarks

>> Day 23a
benchmarking...
time                 3.750 ms   (3.729 ms .. 3.780 ms)
                     0.998 R²   (0.995 R² .. 1.000 R²)
mean                 3.789 ms   (3.762 ms .. 3.836 ms)
std dev              127.3 μs   (65.75 μs .. 221.4 μs)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 23b
benchmarking...
time                 48.57 ms   (48.41 ms .. 48.73 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 48.66 ms   (48.57 ms .. 48.78 ms)
std dev              200.5 μs   (137.2 μs .. 264.3 μs)

* parsing and formatting times excluded