Reflections 2024
================
2016 / 2018 / 2019 / 2020 / 2021 / 2022 / 2023 / 2024
- Day 1
- Day 2
- Day 3
- Day 4
- Day 5
- Day 6
- Day 7
- Day 8
- Day 9
- Day 10
- Day 11
- Day 12
- Day 13 (benchmark only)
- Day 14
- Day 15
- Day 16 (benchmark only)
- Day 17 (benchmark only)
- Day 18
- Day 19
- Day 20 (benchmark only)
- Day 21 (benchmark only)
- Day 22 (benchmark only)
- Day 23
- Day 24 (benchmark only)
- Day 25

Day 1
------

Top / Prompt / Code / Standalone
Day 1 is always a Haskell warmup :)
One nice way to get both lists is to parse `[(Int, Int)]` and use `unzip :: [(a, b)] -> ([a], [b])`, turning a list of pairs into a pair of lists.
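For instance, a minimal parsing sketch (a hypothetical `parsePairs` helper, not necessarily the parser from my actual solution, assuming two whitespace-separated numbers per line):

```haskell
-- Hypothetical helper: turn "3   4"-style lines into pairs
parsePairs :: String -> [(Int, Int)]
parsePairs = map toPair . lines
  where
    toPair ln = case map read (words ln) of
      [x, y] -> (x, y)
      _ -> error "malformed line"
```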
Once we have our two `[Int]`s, part 1 is a zip:
```haskell
import Data.List (sort)

part1 :: [Int] -> [Int] -> Int
-- sort both lists so the smallest pairs with the smallest, per the puzzle
part1 xs ys = sum $ map abs (zipWith subtract (sort xs) (sort ys))
```
For part 2, we can build a frequency map and then map a lookup over it:
```haskell
import qualified Data.Map as M

part2 :: [Int] -> [Int] -> Int
part2 xs ys = sum $ map (\x -> x * M.findWithDefault 0 x freqMap) xs
  where
    freqMap :: M.Map Int Int
    freqMap = M.fromListWith (+) (map (,1) ys)
```

### Day 1 Benchmarks

```
>> Day 01a
benchmarking...
time 393.8 μs (392.4 μs .. 394.9 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 393.0 μs (392.4 μs .. 393.5 μs)
std dev 1.986 μs (1.684 μs .. 2.403 μs)
* parsing and formatting times excluded
>> Day 01b
benchmarking...
time 181.5 μs (181.0 μs .. 182.3 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 182.2 μs (181.9 μs .. 182.7 μs)
std dev 1.178 μs (755.9 ns .. 1.950 μs)
* parsing and formatting times excluded
```

Day 2
------

Top / Prompt / Code / Standalone
Again a straightforward Haskell day. I have a utility function I use for a bunch of these:
```haskell
countTrue :: (a -> Bool) -> [a] -> Int
countTrue p = length . filter p
```
So we can run `countTrue` over our list of `[Int]`s. The predicate is:
```haskell
import Data.Ix (inRange)

predicate :: [Int] -> Bool
predicate xs =
    all (inRange (1, 3)) diffies
      || all (inRange (1, 3) . negate) diffies
  where
    diffies = zipWith subtract xs (drop 1 xs)
```
Part 1 is a straightforward application of `countTrue predicate`. For part 2, we can see if any of the possibilities match the predicate.
```haskell
import Data.List (inits, tails)

part1 :: [[Int]] -> Int
part1 = countTrue predicate

part2 :: [[Int]] -> Int
part2 = countTrue \xs ->
    let possibilities = xs : zipWith (++) (inits xs) (tail (tails xs))
     in any predicate possibilities
```
`inits [1,2,3]` gives us `[]`, `[1]`, `[1,2]`, and `[1,2,3]`, and `tail (tails xs)` gives us `[2,3]`, `[3]`, and `[]`. So we can zip those up to get `[2,3]`, `[1,3]`, and `[1,2]`. We just need to make sure we add back in our original `xs`.
This is probably the simplest way to write it, but there's something cute/recursive we can do using the list monad to generate all of the possibilities: for each `x : xs`, we can either "drop here" or "drop later":
```haskell
tryDrops :: [a] -> [[a]]
tryDrops = \case
  [] -> [[]]
  x : xs -> xs : ((x :) <$> tryDrops xs)
  --        ^ drop here
  --             ^ drop later
```
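Note that the final branch of the recursion keeps every element, so `tryDrops [1,2,3]` gives `[[2,3],[1,3],[1,2],[1,2,3]]`: the original drop-nothing list comes along for free at the end.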
And this simplifies part 2 significantly:

```haskell
part2 :: [[Int]] -> Int
part2 = countTrue $ any predicate . tryDrops
```

### Day 2 Benchmarks

```
>> Day 02a
benchmarking...
time 49.05 μs (48.35 μs .. 49.79 μs)
0.993 R² (0.981 R² .. 0.999 R²)
mean 49.39 μs (48.18 μs .. 52.99 μs)
std dev 5.746 μs (1.093 μs .. 10.17 μs)
variance introduced by outliers: 87% (severely inflated)
* parsing and formatting times excluded
>> Day 02b
benchmarking...
time 425.5 μs (424.0 μs .. 426.9 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 424.3 μs (423.4 μs .. 426.0 μs)
std dev 3.680 μs (2.733 μs .. 6.026 μs)
* parsing and formatting times excluded
```

Day 3
------

Top / Prompt / Code / Standalone
You can think of the whole thing as essentially a state machine / finite automaton. For part 1 it's straightforward: chomp as many `mul(x,y)` as possible, summing the muls:
```haskell
import qualified Control.Monad.Combinators as P
import qualified Text.Megaparsec as P
import qualified Text.Megaparsec.Char as P
import qualified Text.Megaparsec.Char.Lexer as PL

parseMul :: P.Parsec v String Int
parseMul = product <$> P.between "mul(" ")" (PL.decimal `P.sepBy` ",")

part1 :: P.Parsec v String Int
part1 = sum <$> P.many (dropUntil parseMul)

-- | A utility parser combinator I have that skips until the first match
dropUntil :: P.Parsec e s end -> P.Parsec e s end
dropUntil x = P.try (P.skipManyTill P.anySingle (P.try x))
```
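For example, on a string like `xmul(2,4)%&mul[3,7]!mul(5,5)`, running `part1` (via something like `P.parseMaybe`, at a concrete error type such as `Void`) should give `Just 33`: `mul(2,4)` and `mul(5,5)` parse, and the malformed `mul[3,7]` is skipped over.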
For part 2 the state machine has an "on or off" state: in the "off" state, search for the next `do()`. In the "on" state, search for the next `mul` and continue on, or the next `don't()` and continue off.
```haskell
part2 :: P.Parsec v String Int
part2 = sum <$> goEnabled
  where
    goDisabled = P.option [] . dropUntil $ "do()" *> goEnabled
    goEnabled = P.option [] . dropUntil $
      P.choice
        [ "don't()" *> goDisabled
        , (:) <$> parseMul <*> goEnabled
        ]
```

### Day 3 Benchmarks

```
>> Day 03a
benchmarking...
time 1.173 ms (1.164 ms .. 1.181 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 1.179 ms (1.170 ms .. 1.186 ms)
std dev 29.67 μs (22.62 μs .. 37.85 μs)
variance introduced by outliers: 14% (moderately inflated)
* parsing and formatting times excluded
>> Day 03b
benchmarking...
time 1.827 ms (1.809 ms .. 1.860 ms)
0.999 R² (0.998 R² .. 1.000 R²)
mean 1.792 ms (1.786 ms .. 1.809 ms)
std dev 28.94 μs (18.52 μs .. 51.00 μs)
* parsing and formatting times excluded
```

Day 4
------

Top / Prompt / Code / Standalone
Here we are matching "stencils" across different windows, so it's always fun to use comonads for this. That's because `extend :: (w a -> b) -> w a -> w b` lets you automagically convert a function on windows (the `w a -> b`) into a `w a -> w b`, the application across every window.

First we parse our input into a `Map Point Char`, where `Point` is `V2 Int` (`data V2 a = V2 a a`, from the linear library), a tuple type with the correct `Num` instance that I use for most of these.
Our stencils are (centered around 0,0):

```haskell
import Linear.V2 (V2 (..), (*^))

xmas :: [Map (V2 Int) Char]
xmas =
    [ M.fromList [(i *^ step, x) | (i, x) <- zip [0 ..] "XMAS"]
    | d <- [V2 1 0, V2 0 1, V2 1 1, V2 (-1) 1]
    , step <- [d, negate d]
    ]

crossMas :: [Map (V2 Int) Char]
crossMas =
    [ M.insert 0 'A' (diag1 <> diag2)
    | diag1 <- M.fromList . zip [V2 (-1) (-1), V2 1 1] <$> ["MS", "SM"]
    , diag2 <- M.fromList . zip [V2 1 (-1), V2 (-1) 1] <$> ["MS", "SM"]
    ]
```
Now some utility functions to wrap and unwrap our `Map (V2 Int) Char` into a `Store (V2 Int) (Maybe Char)` store comonad, so we can use its `Comonad` instance:
```haskell
mapToStore :: (Ord k, Num k) => Map k a -> Store k (Maybe a)
mapToStore mp = store (`M.lookup` mp) 0

mapFromStore :: Num k => Set k -> Store k a -> Map k a
mapFromStore ks = experiment \x -> M.fromSet (+ x) ks
```
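(For reference, `experiment :: Functor f => (s -> f s) -> Store s a -> f a` samples the store at a whole functor-ful of positions: `M.fromSet (+ x) ks` builds a `Map` keyed by the points in `ks`, so `mapFromStore` reads off the store's value at each of those points offset by the current focus `x`.)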
Now a function to check if a stencil matches a neighborhood:

```haskell
checkStencil :: (Num k, Eq a) => Map k a -> Store k (Maybe a) -> Bool
checkStencil mp x = all (\(p, expected) -> peeks (+ p) x == Just expected) (M.toList mp)

countWindowMatches :: (Num k, Eq a) => [Map k a] -> Store k (Maybe a) -> Int
countWindowMatches mps x = length $ filter (`checkStencil` x) mps
```
Now we have a `Store k (Maybe a) -> Int`, which takes a window and gives an `Int` that is the number of stencil matches at the window origin. The magic of comonad is that `extend (countWindowMatches stencils) :: Store k (Maybe a) -> Store k Int` now runs that windowed function across the entire map.
```haskell
countMatches :: Eq a => [Map (V2 Int) a] -> Map (V2 Int) a -> Int
countMatches stencils xs =
    sum . mapFromStore (M.keysSet xs) . extend (countWindowMatches stencils) . mapToStore $ xs

part1 :: Map (V2 Int) Char -> Int
part1 = countMatches xmas

part2 :: Map (V2 Int) Char -> Int
part2 = countMatches crossMas
```

### Day 4 Benchmarks

```
>> Day 04a
benchmarking...
time 37.83 ms (37.05 ms .. 38.53 ms)
0.998 R² (0.991 R² .. 1.000 R²)
mean 38.29 ms (37.96 ms .. 39.21 ms)
std dev 1.043 ms (345.7 μs .. 1.881 ms)
* parsing and formatting times excluded
>> Day 04b
benchmarking...
time 22.07 ms (21.94 ms .. 22.20 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 22.06 ms (21.99 ms .. 22.13 ms)
std dev 156.2 μs (117.2 μs .. 204.2 μs)
* parsing and formatting times excluded
```

Day 5
------

Top / Prompt / Code / Standalone
This one lends itself pretty nicely to basically topologically sorting each page list according to the graph of "X precedes Y" edges. If we have a list of `(Int, Int)` rules, we can build a graph where the nodes are the page numbers and the edges are "X precedes Y". Then for each page list, we can filter that graph for only the nodes in that page list, and then toposort it:
```haskell
import Data.Containers.ListUtils (nubOrd)
import qualified Data.Graph.Inductive as G
import qualified Data.Set as S

sortByRules :: [(Int, Int)] -> [Int] -> [Int]
sortByRules rules = \xs ->
    G.topsort . G.nfilter (`S.member` S.fromList xs) $ ruleGraph
  where
    ruleGraph :: G.Gr () ()
    ruleGraph =
      G.mkUGraph
        (nubOrd $ foldMap (\(x, y) -> [x, y]) rules)
        rules
```
```haskell
part1 :: [(Int, Int)] -> [[Int]] -> Int
part1 rules pages = sum
    [ middleVal orig
    | orig <- pages
    , orig == sorter orig
    ]
  where
    sorter = sortByRules rules

part2 :: [(Int, Int)] -> [[Int]] -> Int
part2 rules pages = sum
    [ middleVal sorted
    | orig <- pages
    , let sorted = sorter orig
    , orig /= sorted
    ]
  where
    sorter = sortByRules rules
```
We write `sortByRules` with a lambda closure (binding `sorter` once per rule set) to ensure that the graph is generated only once, with the closure then re-applied for every page list.
One cute way to find the middle value is to traverse the list twice at the same time "in parallel", with one pointer moving twice as quickly as the other:
```haskell
middleVal :: [a] -> a
middleVal xs0 = go xs0 xs0
  where
    go (_ : xs) (_ : _ : ys) = go xs ys
    go (x : _) _ = x
```
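For example, `middleVal [3,1,4,1,5]`: the doubled-up pointer runs out of list after two steps of the single pointer, so the second clause fires with the slow pointer sitting at `4`, the third of the five elements.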

### Day 5 Benchmarks

```
>> Day 05a
benchmarking...
time 18.31 ms (18.13 ms .. 18.47 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 18.42 ms (18.27 ms .. 18.57 ms)
std dev 359.7 μs (219.9 μs .. 538.4 μs)
* parsing and formatting times excluded
>> Day 05b
benchmarking...
time 17.68 ms (17.64 ms .. 17.72 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 17.69 ms (17.65 ms .. 17.72 ms)
std dev 93.29 μs (64.40 μs .. 139.0 μs)
* parsing and formatting times excluded
```

Day 6
------

Top / Prompt / Code / Standalone
This one features a common staple of Advent of Code: the 2D grid. In this case we can parse it as a `Set Point` of boulders and an initial starting `Point`, with `type Point = V2 Int` from the linear library, which has good `Num`, `Functor`, `Foldable` instances, etc.
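A quick sketch of how that parse might look (a hypothetical `parseGrid` helper, assuming `#` marks a boulder and `^` marks the guard's start):

```haskell
-- Hypothetical helper: collect boulder positions and the starting point
parseGrid :: String -> (S.Set Point, Point)
parseGrid str = (S.fromList [p | (p, '#') <- tiles], head [p | (p, '^') <- tiles])
  where
    tiles =
      [ (V2 x y, c)
      | (y, row) <- zip [0 ..] (lines str)
      , (x, c) <- zip [0 ..] row
      ]
```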
Then the (possibly infinite) stepping function becomes:
```haskell
import Data.Finite
import Data.Foldable (toList)
import Data.Ix (inRange)
import Linear.V2
import qualified Data.Set as S
import qualified Data.Vector.Sized as SV

type Point = V2 Int

stepInDir :: Finite 4 -> Point
stepInDir = SV.index $ SV.fromTuple (V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0)

stepPath :: Int -> S.Set Point -> Point -> [(Point, Finite 4)]
stepPath maxCoord boulders = takeWhile (inBounds . fst) . iterate go . (,0)
  where
    go (x, d)
      | x' `S.member` boulders = (x, d + 1)
      | otherwise = (x', d)
      where
        x' = x + stepInDir d
    -- stop once the guard walks off the grid
    inBounds = all (inRange (0, maxCoord))

part1 :: S.Set Point -> Point -> Int
part1 boulders = S.size . S.fromList . map fst . stepPath maxCoord boulders
  where
    maxCoord = maximum (foldMap toList boulders)
```
Here I use `Finite 4` to give a cyclic type I can repeatedly rotate, and look up a single step in that direction from a size-4 vector. In my actual code I use a data type `data Dir = North | East | South | West` that is essentially the same thing.
For part 2 we can just try to insert new boulders along the original route and count the boulders that give loops. We can use tortoise and hare to do loop detection.
```haskell
hasLoop :: Eq a => [a] -> Bool
hasLoop xs0 = go xs0 (drop 1 xs0)
  where
    go (x : xs) (y : _ : ys) = x == y || go xs ys
    go _ _ = False

part2 :: S.Set Point -> Point -> Int
part2 boulders p0 = length . filter goodBoulder . nubOrd . map fst $ stepPath maxCoord boulders p0
  where
    maxCoord = maximum (foldMap toList boulders)
    -- re-run the simulation from p0 with the candidate boulder inserted
    goodBoulder p = p /= p0 && hasLoop (stepPath maxCoord (S.insert p boulders) p0)
```
Overall this runs in about 1 second on my machine. You could optimize it a bit by jumping directly to the next boulder: keep a map of x to the y's of all boulders in that column (so you can move vertically) and a map of y to the x's of all boulders in that row (so you can move horizontally).
```haskell
import Data.Functor ((<&>))
import Data.List (unfoldr)

collapseAxes :: Foldable f => f Point -> V2 (Map Int (Set Int))
collapseAxes = foldl' (flip addAxesMap) mempty

addAxesMap :: Point -> V2 (Map Int (Set Int)) -> V2 (Map Int (Set Int))
addAxesMap (V2 x y) (V2 xMaps yMaps) =
    V2
      (M.insertWith (<>) x (S.singleton y) xMaps)
      (M.insertWith (<>) y (S.singleton x) yMaps)

slideAxes :: V2 (Map Int (Set Int)) -> Point -> Finite 4 -> Maybe Point
slideAxes (V2 xMap yMap) (V2 x y) = SV.index $ SV.fromTuple
    ( S.lookupLT y (M.findWithDefault mempty x xMap) <&> \y' -> V2 x (y' + 1)
    , S.lookupGT x (M.findWithDefault mempty y yMap) <&> \x' -> V2 (x' - 1) y
    , S.lookupGT y (M.findWithDefault mempty x xMap) <&> \y' -> V2 x (y' - 1)
    , S.lookupLT x (M.findWithDefault mempty y yMap) <&> \x' -> V2 (x' + 1) y
    )

stepPath' :: V2 (Map Int (Set Int)) -> Point -> [(Point, Finite 4)]
stepPath' as = unfoldr go . (,0)
  where
    go (p, d) = do
      p' <- slideAxes as p d
      pure ((p', d + 1), (p', d + 1))

part2' :: S.Set Point -> Point -> Int
part2' boulders p0 = length . filter goodBoulder . nubOrd . map fst $ stepPath maxCoord boulders p0
  where
    maxCoord = maximum (foldMap toList boulders)
    axesMap0 = collapseAxes boulders
    goodBoulder p = p /= p0 && hasLoop (stepPath' (addAxesMap p axesMap0) p0)
```
This cuts the time by about 30x.

### Day 6 Benchmarks

```
>> Day 06a
benchmarking...
time 1.452 ms (1.432 ms .. 1.470 ms)
0.999 R² (0.998 R² .. 0.999 R²)
mean 1.448 ms (1.440 ms .. 1.462 ms)
std dev 37.35 μs (24.50 μs .. 53.82 μs)
variance introduced by outliers: 14% (moderately inflated)
* parsing and formatting times excluded
>> Day 06b
benchmarking...
time 36.06 ms (35.95 ms .. 36.17 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 35.89 ms (35.69 ms .. 35.99 ms)
std dev 290.5 μs (156.0 μs .. 433.8 μs)
* parsing and formatting times excluded
```

Day 7
------

Top / Prompt / Code / Standalone
This one works out well as a list-monad-based search. Essentially you are picking operations `?` where `targ == (x ? y) ? z`, and if those `?` operations induce a list monad split, you can then search all of the possible choices:
```haskell
checkEquation :: [Int -> Int -> Int] -> Int -> [Int] -> Bool
checkEquation ops targ xs = targ `elem` foldl1M branchOnOp xs
  where
    branchOnOp a b = map (\f -> f a b) ops
    -- foldl1M here is a monadic foldl1 from my utilities,
    -- equivalent to \f (y : ys) -> foldM f y ys
```
Then you can do `checkEquation [(+), (*)]` for part 1 and `checkEquation [(+), (*), cat]` for part 2.
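Here `cat` is base-10 digit concatenation; a quick sketch of it (my rendition, assuming the right operand is at least 1, as it always is in the puzzle):

```haskell
-- Concatenate the digits of y onto the end of x: cat 12 345 == 12345.
-- Assumes y >= 1.
cat :: Int -> Int -> Int
cat x y = x * 10 ^ pow + y
  where
    -- number of digits in y
    pow = length . takeWhile (<= y) $ iterate (* 10) 1
```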
However, it is kind of helpful to work backwards from the target to see if you can get the initial number. For example, in `292: 11 6 16 20`, you can eliminate `*` as an option for the final operation right off the bat, because 292 is not divisible by 20.
So really, you can rephrase the problem as `x == y ? (z ? targ)`, where the `?` are now the inverse operations, and you have some way to easily eliminate operations that don't make sense:
```haskell
import Data.Foldable (foldrM)
import Data.Maybe (mapMaybe)

checkBackEquation :: [Int -> Int -> Maybe Int] -> Int -> [Int] -> Bool
checkBackEquation unOps targ (x : xs) = x `elem` foldrM branchOnUnOp targ xs
  where
    branchOnUnOp a b = mapMaybe (\f -> f a b) unOps
```
And our un-ops are (these use `MonadComprehensions` at `Maybe`):

```haskell
unAdd :: Int -> Int -> Maybe Int
unAdd x y = [y - x | y >= x]

unMul :: Int -> Int -> Maybe Int
unMul x y = [y `div` x | y `mod` x == 0]

unCat :: Int -> Int -> Maybe Int
unCat x y = [d | m == x]
  where
    -- number of digits in x; note (<= x), so powers of ten count correctly
    pow = length . takeWhile (<= x) $ iterate (* 10) 1
    (d, m) = y `divMod` (10 ^ pow)
```
So part 1 is `checkBackEquation [unAdd, unMul]` and part 2 is `checkBackEquation [unAdd, unMul, unCat]`.
Timing-wise, moving from forwards to backwards brought my times for part 2 from 380ms to 1.2ms.

### Day 7 Benchmarks

```
>> Day 07a
benchmarking...
time 685.2 μs (680.2 μs .. 692.3 μs)
0.989 R² (0.975 R² .. 0.999 R²)
mean 723.5 μs (701.6 μs .. 756.7 μs)
std dev 94.38 μs (57.31 μs .. 128.7 μs)
variance introduced by outliers: 84% (severely inflated)
* parsing and formatting times excluded
>> Day 07b
benchmarking...
time 1.260 ms (1.258 ms .. 1.262 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 1.259 ms (1.258 ms .. 1.260 ms)
std dev 3.710 μs (2.848 μs .. 4.910 μs)
* parsing and formatting times excluded
```

Day 8
------

Top / Prompt / Code / Standalone
Mostly straightforward Haskell, building up the set of all antinodes by iterating over every pair of antennae. The main thing we parameterize over is the way of generating the antinode points from a given pair of locations.
```haskell
import Control.Monad (guard)

makeAntinodes :: Eq a => Map Point a -> (Point -> Point -> [Point]) -> Set Point
makeAntinodes mp genPts = S.fromList do
    (p1, c1) <- M.toList mp
    (p2, c2) <- M.toList mp
    guard $ p1 /= p2 && c1 == c2
    genPts p1 p2

day08 :: (Point -> Point -> [Point]) -> Map Point Char -> Int
day08 stepper mp = S.size $
    makeAntinodes ants \p1 p2 ->
      takeWhile (`S.member` allPoints) $ stepper p1 p2
  where
    allPoints = M.keysSet mp
    ants = M.filter (/= '.') mp

day08a :: Map Point Char -> Int
day08a = day08 \p1 p2 -> [p2 + p2 - p1]

day08b :: Map Point Char -> Int
day08b = day08 \p1 p2 -> iterate (+ (p2 - p1)) p2
```

### Day 8 Benchmarks

```
>> Day 08a
benchmarking...
time 590.0 μs (587.8 μs .. 592.0 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 585.1 μs (583.2 μs .. 586.6 μs)
std dev 5.404 μs (4.505 μs .. 6.494 μs)
* parsing and formatting times excluded
>> Day 08b
benchmarking...
time 990.5 μs (987.5 μs .. 996.0 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 994.3 μs (991.8 μs .. 997.7 μs)
std dev 9.220 μs (8.039 μs .. 11.41 μs)
* parsing and formatting times excluded
```

Day 9
------

Top / Prompt / Code / Standalone
Both of these today involve consuming queues, but the nature of the queues is different. For part 1, we consume two queues: the queue of gaps from left to right, and the queue of files from right to left. For part 2, we consume the queue of file blocks from right to left.
We can actually consume the queues in both cases directly into their checksum without going through an intermediate structure, which is kind of convenient too.
First, let's parse the list of numbers into a usable state: for gaps, an `IntMap` of positions to gap sizes, and for file blocks, an `IntMap` of positions to file ids and lengths.
```haskell
import Data.List (intersperse, mapAccumL)
import qualified Data.IntMap as IM

toDiskState :: [a] -> [Int] -> (IntMap Int, IntMap (a, Int))
toDiskState fids =
    IM.mapEither splitter
      . IM.fromList
      . snd
      . mapAccumL go 0
      . zip (intersperse Nothing (Just <$> fids))
  where
    go i (mfid, len) = (i + len, (i, (mfid, len)))
    splitter (mfid, len) = case mfid of
      Nothing -> Left len
      Just fid -> Right (fid, len)
```
For part 1, the behavior of the queues is non-trivial so it's helpful to write it using explicit recursion. The first queue is the queue of gaps (which we push-back on with a smaller gap length) and the second queue is the queue of reversed single (file slot index, file id) that we pop one-by-one. We also short-circuit to the end if our forward gap indices move past our backwards file indices.
```haskell
fillGaps
    :: [(Int, Int)] -- ^ list of (gap starting index, gap length), left-to-right
    -> [(Int, Int)] -- ^ list of (single file slot index, file id), right-to-left
    -> Int
fillGaps [] ends = sum $ map (uncurry (*)) ends
fillGaps _ [] = 0
fillGaps ((gapI, gapLen) : gaps) ((endI, fid) : ends)
  | endI > gapI = gapI * fid + fillGaps (addBack gaps) ends
  | otherwise = endI * fid + sum (map (uncurry (*)) ends)
  where
    addBack
      | gapLen == 1 = id
      | otherwise = ((gapI + 1, gapLen - 1) :)
```
```haskell
part1 :: IntMap Int -> IntMap (Int, Int) -> Int
part1 gaps files =
    fillGaps
      (IM.toList gaps)
      [ (i, fid)
      | (i0, (fid, len)) <- IM.toDescList files
      , i <- take len $ iterate (subtract 1) (i0 + len - 1)
      ]
```
For part 2, our queue consumption is pretty typical, with no re-push or short-circuiting. We just move through every single file in reverse once, so it can be captured as a `mapAccumL`: a stateful map over the backwards file blocks, where the state is the empty slot candidates.
```haskell
moveBlock :: IntMap Int -> (Int, (Int, Int)) -> (IntMap Int, Int)
moveBlock gaps (i, (fid, fileLen)) = (gaps', hereContrib)
  where
    foundGap = find ((>= fileLen) . snd) . IM.toAscList $ IM.takeWhileAntitone (< i) gaps
    hereContrib = fid * ((fileLen * (fileLen + 1)) `div` 2 + fileLen * (maybe i fst foundGap - 1))
    gaps' = case foundGap of
      Nothing -> gaps
      Just (gapI, gapLen) ->
        let addBack
              | gapLen > fileLen = IM.insert (gapI + fileLen) (gapLen - fileLen)
              | otherwise = id
         in addBack . IM.delete gapI $ gaps
```
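As a sanity check on `hereContrib`: a file with id `fid` and length `len` written starting at slot `s` contributes `fid * (s + (s + 1) + ... + (s + len - 1)) = fid * (len * s + len * (len - 1) / 2)` to the checksum, and `len * (len + 1) / 2 + len * (s - 1)` is that same quantity rearranged, with `s = maybe i fst foundGap`: the gap start if a gap was found, or the file's original position otherwise.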
```haskell
part2 :: IntMap Int -> IntMap (Int, Int) -> Int
part2 gaps files = sum . snd . mapAccumL moveBlock gaps $ IM.toDescList files
```

### Day 9 Benchmarks

```
>> Day 09a
benchmarking...
time 6.924 ms (6.781 ms .. 7.068 ms)
0.986 R² (0.972 R² .. 0.997 R²)
mean 7.307 ms (7.129 ms .. 7.627 ms)
std dev 668.0 μs (424.9 μs .. 912.4 μs)
variance introduced by outliers: 53% (severely inflated)
* parsing and formatting times excluded
>> Day 09b
benchmarking...
time 16.25 ms (16.15 ms .. 16.33 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 16.27 ms (16.23 ms .. 16.31 ms)
std dev 96.57 μs (73.62 μs .. 132.0 μs)
* parsing and formatting times excluded
```

Day 10
------

Top / Prompt / Code / Standalone
A lot of times in Haskell, two problems end up having the same algorithm, just with a different choice of `Monoid`. This puzzle is a good example of that.
We can do a simple DFS and collect all 9's into a monoid:
```haskell
gatherNines :: Monoid m => (Point -> m) -> Map Point Int -> Point -> m
gatherNines f mp = go 0
  where
    go x p
      | x == 9 = f p
      | otherwise =
          foldMap (go (x + 1)) . M.keys . M.filter (== (x + 1)) $ mp `M.restrictKeys` neighbs
      where
        neighbs = S.fromList $ (p +) <$> [V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0]
```
For part 1 the monoid is `Set Point` (the unique 9's) and for part 2 the monoid is `Sum Int` (the number of paths):
```haskell
solve :: Monoid m => (Point -> m) -> (m -> Int) -> Map Point Int -> Int
solve gather observe mp =
    sum . map (observe . gatherNines gather mp) . M.keys $ M.filter (== 0) mp

part1 :: Map Point Int -> Int
part1 = solve S.singleton S.size

part2 :: Map Point Int -> Int
part2 = solve (const (Sum 1)) getSum
```

### Day 10 Benchmarks

```
>> Day 10a
benchmarking...
time 4.814 ms (4.787 ms .. 4.843 ms)
1.000 R² (0.999 R² .. 1.000 R²)
mean 4.824 ms (4.801 ms .. 4.852 ms)
std dev 78.24 μs (54.79 μs .. 116.7 μs)
* parsing and formatting times excluded
>> Day 10b
benchmarking...
time 4.727 ms (4.713 ms .. 4.753 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 4.736 ms (4.726 ms .. 4.752 ms)
std dev 37.76 μs (28.79 μs .. 49.63 μs)
* parsing and formatting times excluded
```

Day 11
------

Top / Prompt / Code / Standalone
Today's "one trick" seems to be realizing that the actual ordered "list" is a red herring: a number's progression doesn't depend on any of its neighbors or on its position. So what we really have is not a list, but a multi-set. Stepping the multiset through 75 iterations is very efficient --- it shows you what you gain when you use the correct data structure to represent the state!
```haskell
freqs :: [Int] -> IntMap Int
freqs = IM.fromListWith (+) . map (,1)

stepMap :: IntMap Int -> IntMap Int
stepMap mp = IM.unionsWith (+)
    [ (* n) <$> freqs (step x)
    | (x, n) <- IM.toList mp
    ]

step :: Int -> [Int]
step c
  | c == 0 = [1]
  | even pow = let (a, b) = c `divMod` (10 ^ (pow `div` 2)) in [a, b]
  | otherwise = [c * 2024]
  where
    -- number of digits in c, e.g. 2024 has 4 digits, so it splits into [20,24]
    pow = length . takeWhile (<= c) $ iterate (* 10) 1

part1 :: [Int] -> Int
part1 = sum . (!! 25) . iterate stepMap . freqs

part2 :: [Int] -> Int
part2 = sum . (!! 75) . iterate stepMap . freqs
```
My original reflections/write-up used data-memocombinators, but after some thought I believe that the frequency map approach is the most natural.

### Day 11 Benchmarks

```
>> Day 11a
benchmarking...
time 593.6 μs (592.6 μs .. 594.4 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 593.3 μs (592.4 μs .. 594.6 μs)
std dev 3.930 μs (2.585 μs .. 5.749 μs)
* parsing and formatting times excluded
>> Day 11b
benchmarking...
time 45.70 ms (45.39 ms .. 46.05 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 44.66 ms (44.05 ms .. 44.97 ms)
std dev 887.1 μs (525.0 μs .. 1.377 ms)
* parsing and formatting times excluded
```

Day 12
------

Top / Prompt / Code / Standalone
First of all, let's assume we had a function that took a set and found all contiguous regions of that set:

```haskell
contiguousRegions :: Set Point -> [Set Point]
```
Now we can take a `Map Point a` and get a map of `a`'s to all of the contiguous regions with that label:
```haskell
regions :: Ord a => Map Point a -> Map a [Set Point]
regions mp =
    contiguousRegions
      <$> M.fromListWith (<>) [ (x, S.singleton p) | (p, x) <- M.toList mp ]
```
Now it helps to take a region and create four sets: all of the region's external neighbors to the north, all of its external neighbors to the west, then to the south, then to the east:
```haskell
neighborsByDir :: Set Point -> [Set Point]
neighborsByDir pts = neighborsAt <$> [V2 0 1, V2 1 0, V2 0 (-1), V2 (-1) 0]
  where
    neighborsAt d = S.map (+ d) pts `S.difference` pts
```
Now part 1 is basically the size of all of those neighbor points, and part 2 is the number of contiguous regions of those points:
```haskell
solve :: Ord a => (Set Point -> Int) -> Map Point a -> Int
solve countFences mp = sum
    [ S.size region * countFences dirRegion
    | letterRegions <- toList (regions mp)
    , region <- letterRegions
    , dirRegion <- neighborsByDir region
    ]

part1 :: Ord a => Map Point a -> Int
part1 = solve S.size

part2 :: Ord a => Map Point a -> Int
part2 = solve (length . contiguousRegions)
```
Okay, I'll admit that I had `contiguousRegions` saved from multiple years of Advent of Code. The actual source isn't too pretty, but I'm including it here for completeness's sake. In my actual code I use a set and a non-empty set instead of a list and a set.
```haskell
import Data.Foldable (foldMap')
import Data.Set.NonEmpty (NESet)
import qualified Data.Set.NonEmpty as NES

-- | Find contiguous regions by cardinal neighbors
contiguousRegions :: Set Point -> Set (NESet Point)
contiguousRegions = startNewPool S.empty
  where
    startNewPool seenPools remaining = case S.minView remaining of
      Nothing -> seenPools
      Just (x, xs) ->
        let (newPool, remaining') = fillUp (NES.singleton x) S.empty xs
         in startNewPool (S.insert newPool seenPools) remaining'
    fillUp boundary internal remaining = case NES.nonEmptySet newBoundary of
        Nothing -> (newInternal, remaining)
        Just nb -> fillUp nb (NES.toSet newInternal) newRemaining
      where
        edgeCandidates = foldMap' cardinalNeighbsSet boundary `S.difference` internal
        newBoundary = edgeCandidates `S.intersection` remaining
        newInternal = NES.withNonEmpty id NES.union internal boundary
        newRemaining = remaining `S.difference` edgeCandidates
```

### Day 12 Benchmarks

```
>> Day 12a
benchmarking...
time 44.45 ms (42.17 ms .. 49.85 ms)
0.972 R² (0.929 R² .. 1.000 R²)
mean 43.86 ms (42.95 ms .. 47.64 ms)
std dev 2.986 ms (792.5 μs .. 5.596 ms)
variance introduced by outliers: 20% (moderately inflated)
* parsing and formatting times excluded
>> Day 12b
benchmarking...
time 42.87 ms (42.16 ms .. 43.47 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 42.53 ms (42.38 ms .. 42.77 ms)
std dev 363.0 μs (176.7 μs .. 597.5 μs)
* parsing and formatting times excluded
```

Day 13
------

### Day 13 Benchmarks

```
>> Day 13a
benchmarking...
time 1.887 ms (1.881 ms .. 1.893 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 1.891 ms (1.887 ms .. 1.898 ms)
std dev 18.44 μs (14.30 μs .. 26.49 μs)
* parsing and formatting times excluded
>> Day 13b
benchmarking...
time 12.28 ms (12.09 ms .. 12.54 ms)
0.998 R² (0.996 R² .. 1.000 R²)
mean 12.06 ms (12.00 ms .. 12.22 ms)
std dev 246.8 μs (174.1 μs .. 402.2 μs)
* parsing and formatting times excluded
```

Day 14
------

Top / Prompt / Code / Standalone
Problems like this showcase the utility of using `V2` from linear for keeping track of points. The "step" function ends up pretty clean:

```haskell
type Point = V2 Int

step :: Point -> Point -> Point
step v x = mod <$> (x + v) <*> V2 101 103
```
Also, if we parse into `[V2 Point]` (a position and a velocity paired up in a `V2`) we can use `sequence` to unzip our list into a `V2 [Point]`: a list of positions and a list of velocities. We can then use `iterate` and `zipWith` to step them:
```haskell
part1 :: [V2 Point] -> Int
part1 pvs = score $ iterate (zipWith step vs) ps !! 100
  where
    V2 ps vs = sequence pvs
    score = product . M.fromListWith (+) . mapMaybe (\p -> (,1) <$> quadrant p)
    quadrant p = mfilter (notElem EQ) $ Just (compare <$> p <*> V2 50 51)
```
`quadrant` here uses the `Applicative` instance and also the `Foldable` instance with `notElem`.
For my original solve of part 2, I stopped when I detected any large clusters. But, once we see that the actual input consists of vertical and horizontal lines, we can do a bit of optimization. We know that the x positions have a period of 101, so frames with vertical lines appear with period 101; y positions have a period of 103, so frames with horizontal lines appear with period 103. So, we can look at the first 101 frames and find any vertical lines, then the first 103 frames and find any horizontal lines, and then do some math to figure out when the periodic appearances will line up.
```haskell
maxMargin :: [[Int]] -> Int
maxMargin = fst . maximumBy (comparing (concentration . snd)) . zip [0 ..]
  where
    concentration = product . M.fromListWith (+) . map (,1)

part2 :: [V2 Point] -> Int
part2 pvs = (xi + ((yi - xi) * 5151)) `mod` 10403
  where
    V2 ps vs = sequence pvs
    steps = iterate (zipWith step vs) ps
    xi = maxMargin (map (view _x) <$> take 101 steps)
    yi = maxMargin (map (view _y) <$> take 103 steps)
```
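The magic numbers come from the Chinese Remainder Theorem: we want the time `t` with `t = xi (mod 101)` and `t = yi (mod 103)`. Writing `t = xi + 101 * k`, we need `101 * k = yi - xi (mod 103)`; since `101 = -2 (mod 103)` and `(-2) * 51 = -102 = 1 (mod 103)`, the inverse of 101 modulo 103 is 51. So `k = 51 * (yi - xi) (mod 103)`, giving `t = xi + 5151 * (yi - xi)`, all modulo `101 * 103 = 10403`.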

### Day 14 Benchmarks

```
>> Day 14a
benchmarking...
time 1.251 ms (1.246 ms .. 1.255 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 1.251 ms (1.244 ms .. 1.261 ms)
std dev 26.18 μs (21.12 μs .. 32.54 μs)
* parsing and formatting times excluded
>> Day 14b
benchmarking...
time 14.04 ms (13.92 ms .. 14.25 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 13.92 ms (13.88 ms .. 14.02 ms)
std dev 148.2 μs (69.32 μs .. 252.7 μs)
* parsing and formatting times excluded
```

Day 15
------

Top / Prompt / Code / Standalone
This puzzle involves iteratively following "steps" and seeing how things change. If we store the world state polymorphically as a `Map Point a`, then we can write something generic to unite both parts.
Our polymorphic stepper will take:

- A `Set Point` of immovable walls
- A "glue" function `Point -> Dir -> a -> [(Point, a)]`, which takes an `a` world entity and returns any other entities glued to it
- A starting state `(Point, Map Point a)`: the player position and the positions of the crates
- A `Dir` motion

and return the new updated `(Point, Map Point a)` state.
It will work by first trying to update the person state: if the person moves into a crate, we try to move that crate in the same direction, `Point -> Map Point a -> a -> Maybe (Map Point a)`. This recursively tries to move any crates along the way, as well as any crates glued to them. The whole thing is wrapped up in a big `Maybe` monad, sequenced together with `foldlM`, so if anything fails, the whole thing fails. This is essentially a recursion-based DFS.
```haskell
type Point = V2 Int

data Dir = North | East | South | West
  deriving (Eq, Show)

moveByDir :: Point -> Dir -> Point
moveByDir p d = p + case d of
    North -> V2 0 1
    East -> V2 1 0
    South -> V2 0 (-1)
    West -> V2 (-1) 0

stepper ::
    forall a.
    (Point -> Dir -> a -> [(Point, a)]) ->
    Set Point ->
    (Point, Map Point a) ->
    Dir ->
    (Point, Map Point a)
stepper glue walls (person, crates) d
    | person' `S.member` walls = (person, crates)
    | otherwise = case M.lookup person' crates of
        Just lr -> maybe (person, crates) (person',) $ tryMove person' crates lr
        Nothing -> (person', crates)
  where
    person' = person `moveByDir` d
    tryMove :: Point -> Map Point a -> a -> Maybe (Map Point a)
    tryMove p crates' moved =
      foldlM (\cs (p', moved') -> tryMoveSingle p' cs moved') crates' ((p, moved) : glue p d moved)
    tryMoveSingle :: Point -> Map Point a -> a -> Maybe (Map Point a)
    tryMoveSingle p crates' moved =
        commit
          <$> if p' `S.member` walls
                then Nothing
                else case M.lookup p' crates' of
                  Just lr -> tryMove p' crates' lr
                  Nothing -> Just crates'
      where
        p' = p `moveByDir` d
        commit = M.delete p . M.insert p' moved
```
Now to pick the glue and the `a`: for part 1, each crate contains no extra information, so `a` will be `()` and `glue _ _ _ = []`, no glue.
```haskell
part1 :: Set Point -> Set Point -> Point -> [Dir] -> Set Point
part1 walls crates person =
    M.keysSet . snd . foldl' (stepper glue walls) (person, M.fromSet (const ()) crates)
  where
    glue _ _ _ = []
```
For part 2, each crate is either a `[` or a `]`: left or right. So we can have the `a` be `Bool`, and the glue be the corresponding partner, but only if the motion direction is vertical:
```haskell
part2 :: Set Point -> Map Point Bool -> Point -> [Dir] -> Set Point
part2 walls crates person =
    M.keysSet . M.filter not . snd . foldl' (stepper glue walls) (person, crates)
  where
    -- False is a left bracket, True is a right bracket; only the left
    -- halves are kept for scoring
    glue p d lr = [(bump lr p, not lr) | d `elem` [North, South]]
    bump = \case
      False -> (+ V2 1 0)
      True -> subtract (V2 1 0)
```
We can score our set of points:

```haskell
score :: Set Point -> Int
score = sum . map (\(V2 x y) -> 100 * y + x) . toList
```

### Day 15 Benchmarks

```
>> Day 15a
benchmarking...
time 2.817 ms (2.795 ms .. 2.832 ms)
0.999 R² (0.998 R² .. 1.000 R²)
mean 2.844 ms (2.819 ms .. 2.912 ms)
std dev 142.4 μs (45.19 μs .. 261.0 μs)
variance introduced by outliers: 32% (moderately inflated)
* parsing and formatting times excluded
>> Day 15b
benchmarking...
time 3.903 ms (3.894 ms .. 3.912 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 3.880 ms (3.870 ms .. 3.890 ms)
std dev 31.83 μs (26.58 μs .. 39.40 μs)
* parsing and formatting times excluded
```

Day 16
------

### Day 16 Benchmarks

```
>> Day 16a
benchmarking...
time 313.6 ms (307.1 ms .. 319.8 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 314.8 ms (313.9 ms .. 316.4 ms)
std dev 1.616 ms (315.0 μs .. 2.207 ms)
variance introduced by outliers: 16% (moderately inflated)
* parsing and formatting times excluded
>> Day 16b
benchmarking...
time 322.2 ms (311.2 ms .. 332.4 ms)
1.000 R² (0.999 R² .. 1.000 R²)
mean 320.8 ms (317.9 ms .. 324.1 ms)
std dev 3.921 ms (2.314 ms .. 5.846 ms)
variance introduced by outliers: 16% (moderately inflated)
* parsing and formatting times excluded
```

Day 17
------

### Day 17 Benchmarks

```
>> Day 17a
benchmarking...
time 2.371 μs (2.248 μs .. 2.531 μs)
0.981 R² (0.962 R² .. 1.000 R²)
mean 2.246 μs (2.206 μs .. 2.338 μs)
std dev 211.0 ns (82.53 ns .. 369.8 ns)
variance introduced by outliers: 87% (severely inflated)
* parsing and formatting times excluded
>> Day 17b
benchmarking...
time 4.444 μs (4.421 μs .. 4.463 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 4.441 μs (4.427 μs .. 4.455 μs)
std dev 52.32 ns (40.38 ns .. 70.82 ns)
* parsing and formatting times excluded
```

Day 18
------

Top / Prompt / Code / Standalone
Honestly there really isn't much to this puzzle other than applying a basic BFS to solve the maze. It isn't really even big enough that a-star would help.
If you parse the maze into an fgl graph, you can use something like `sp :: Node -> Node -> gr a b -> Maybe Path` to get the shortest path. However, because we're here anyway, I'm going to paste in the personal BFS code that I wrote a while ago and use for these challenges, where neighborhoods are given by an `n -> Set n` function. It uses a `Seq` as its internal queue, which is my favorite queue type in Haskell.
```haskell
import Data.Ix (inRange)
import Data.Map (Map)
import Data.Sequence (Seq (..))
import Data.Set (Set)
import Linear.V2 (V2 (..))
import qualified Data.Map as M
import qualified Data.Sequence as Seq
import qualified Data.Set as S

data BFSState n = BS
    { _bsClosed :: !(Map n (Maybe n))
    -- ^ map of item to "parent"
    , _bsOpen :: !(Seq n)
    -- ^ queue
    }

bfs ::
    forall n.
    Ord n =>
    -- | neighborhood
    (n -> Set n) ->
    -- | start
    n ->
    -- | target
    (n -> Bool) ->
    -- | the shortest path, if it exists
    Maybe [n]
bfs ex x0 dest = reconstruct <$> go (addBack x0 Nothing (BS M.empty Seq.empty))
  where
    reconstruct :: (n, Map n (Maybe n)) -> [n]
    reconstruct (goal, mp) = drop 1 . reverse $ goreco goal
      where
        goreco n = n : maybe [] goreco (mp M.! n)
    go :: BFSState n -> Maybe (n, Map n (Maybe n))
    go BS{..} = case _bsOpen of
      Empty -> Nothing
      n :<| ns
        | dest n -> Just (n, _bsClosed)
        | otherwise -> go . S.foldl' (processNeighbor n) (BS _bsClosed ns) $ ex n
    addBack :: n -> Maybe n -> BFSState n -> BFSState n
    addBack x up BS{..} =
      BS
        { _bsClosed = M.insert x up _bsClosed
        , _bsOpen = _bsOpen :|> x
        }
    processNeighbor :: n -> BFSState n -> n -> BFSState n
    processNeighbor curr bs0@BS{..} neighb
      | neighb `M.member` _bsClosed = bs0
      | otherwise = addBack neighb (Just curr) bs0

type Point = V2 Int

cardinalNeighbsSet :: Point -> Set Point
cardinalNeighbsSet p = S.fromDistinctAscList . map (p +) $
    [ V2 (-1) 0, V2 0 (-1), V2 0 1, V2 1 0 ]

solveMaze :: Set Point -> Maybe Int
solveMaze walls = length <$> bfs step 0 (== 70)
  where
    -- 0 and 70 here are V2 0 0 and V2 70 70, via the Num instance
    step p = S.filter (all (inRange (0, 70))) $ cardinalNeighbsSet p `S.difference` walls
```
Now, if you have a list of points `[Point]`, for part 1 you just solve the maze after taking the first 1024 of them:
```haskell
part1 :: [Point] -> Maybe Int
part1 = solveMaze . S.fromList . take 1024
```
For part 2, you can search for the first success, or you can do a binary search.
```haskell
-- | Find the lowest value where the predicate is satisfied within the
-- given bounds.
binaryMinSearch :: (Int -> Bool) -> Int -> Int -> Maybe Int
binaryMinSearch p = go
  where
    go !x !y
      | x == mid || y == mid = Just (x + 1)
      | p mid = go x mid
      | otherwise = go mid y
      where
        mid = ((y - x) `div` 2) + x

part2 :: [Point] -> Maybe Point
part2 pts = do
    j <- binaryMinSearch (isNothing . solveMaze . (wallList !!)) 0 (length pts)
    pure $ pts !! (j - 1)
  where
    wallList = scanl (flip S.insert) S.empty pts
```
You should probably use a container type with better indexing than a list, though.
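For instance, a quick sketch with Data.Vector (hypothetical; not necessarily what my actual code does) to get O(1) indexing into the prefix sets:

```haskell
import qualified Data.Vector as V

part2' :: [Point] -> Maybe Point
part2' pts = do
    j <- binaryMinSearch (isNothing . solveMaze . (wallVec V.!)) 0 (length pts)
    pure $ pts !! (j - 1)
  where
    -- prefix sets of walls, indexable in O(1)
    wallVec = V.fromList $ scanl (flip S.insert) S.empty pts
```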

### Day 18 Benchmarks

```
>> Day 18a
benchmarking...
time 6.592 ms (6.559 ms .. 6.638 ms)
1.000 R² (0.999 R² .. 1.000 R²)
mean 6.546 ms (6.517 ms .. 6.582 ms)
std dev 105.4 μs (86.20 μs .. 140.9 μs)
* parsing and formatting times excluded
>> Day 18b
benchmarking...
time 13.78 ms (13.73 ms .. 13.83 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 13.80 ms (13.78 ms .. 13.84 ms)
std dev 77.43 μs (59.00 μs .. 101.2 μs)
* parsing and formatting times excluded
```

Day 19
------

Top / Prompt / Code / Standalone
This one can be solved using an infinite trie --- we build up an infinite trie of possibilities using patterns, and then look up a given design by tearing down that trie. Written altogether that gives us a hylomorphism! I've written about using tries with recursion schemes in my blog, so this seemed like a natural extension.
```haskell
import Data.Functor.Foldable.TH (makeBaseFunctor)

data CharTrie a = CT {ctHere :: Maybe a, ctThere :: Map Char (CharTrie a)}
  deriving stock (Show, Functor, Traversable, Foldable)

makeBaseFunctor ''CharTrie

-- the above generates for us:
data CharTrieF a r = CTF {ctHereF :: Maybe a, ctThereF :: Map Char r}
  deriving stock (Show, Functor, Traversable, Foldable)
```
We can parameterize on a monoid `a` to solve both parts. For part 1, `a` is `()`: `Just ()` means that the design is in the trie, and `Nothing` means it is not. For part 2, `a` is `Sum Int`: `Just (Sum n)` means there are `n` ways to get this design, and `Nothing` means the design is unreachable.
First, the lookup algebra, which is standard for tries:
```haskell
lookupAlg :: CharTrieF a (String -> Maybe a) -> String -> Maybe a
lookupAlg CTF{..} = \case
  [] -> ctHereF
  c : cs -> ($ cs) =<< M.lookup c ctThereF
```
If we had a `CharTrie a`, then `cata lookupAlg myTrie "hello"` would look up `"hello"` in the trie.
The build-up coalgebra is an interesting one. We will convert a `Map String a` into a `CharTrie a`, but every time we reach the end of a string, we "restart" the building from the start. So, we'll take a `Set String` as well, which we re-trigger whenever we hit the end of a pattern.
```haskell
fromMapCoalg ::
    forall a.
    (Semigroup a) =>
    Set String ->
    Map String a ->
    CharTrieF a (Map String a)
fromMapCoalg mp0 = \ks ->
    let x = M.lookup [] ks
        reAdd = case x of
          Nothing -> id
          Just y -> M.unionWith (M.unionWith (<>)) (M.fromSet (const y) <$> initialSplit)
     in CTF x $ reAdd (splitTrie ks)
  where
    initialSplit :: Map Char (Set String)
    initialSplit = M.fromAscListWith (<>) [ (k, S.singleton ks) | k : ks <- toList mp0 ]

    splitTrie :: Map String a -> Map Char (Map String a)
    splitTrie mp = M.fromAscListWith (<>) [ (k, M.singleton ks x) | (k : ks, x) <- M.toList mp ]
```
And that's it! Our hylomorphism will build up the infinite trie, but only the specific branch that we end up looking up from it. Because it's a hylomorphism, we never actually generate any trie structure: we basically build up only the branch we care about (driven by the lookup) and stop when we finish looking up or hit a dead end.
```haskell
buildable :: (Semigroup a) => a -> Set String -> String -> Maybe a
buildable x mp = hylo lookupAlg (fromMapCoalg mp) (M.fromSet (const x) mp)

part1 :: Set String -> [String] -> Int
part1 pats = length . mapMaybe (buildable () pats)

part2 :: Set String -> [String] -> Int
part2 pats = getSum . foldMap (fold . buildable (Sum 1) pats)
```
### Day 19 Benchmarks
```
>> Day 19a
benchmarking...
time 274.5 ms (238.8 ms .. 319.5 ms)
0.990 R² (0.965 R² .. 1.000 R²)
mean 291.0 ms (280.4 ms .. 304.0 ms)
std dev 15.07 ms (8.887 ms .. 20.78 ms)
variance introduced by outliers: 16% (moderately inflated)
* parsing and formatting times excluded
>> Day 19b
benchmarking...
time 263.2 ms (252.7 ms .. 273.1 ms)
1.000 R² (0.999 R² .. 1.000 R²)
mean 274.3 ms (268.5 ms .. 285.3 ms)
std dev 10.64 ms (1.505 ms .. 14.39 ms)
variance introduced by outliers: 16% (moderately inflated)
* parsing and formatting times excluded
```
Day 20
------
<!--
This section is generated and compiled by the nix derivation at ./site/default.nix from the file
`./reflections/2024/day20.md`. If you want to edit this, edit that file instead!
-->
*[Top](#)* / *[Prompt][d20p]* / *[Code][d20g]*
[d20p]: https://adventofcode.com/2024/day/20
[d20g]: https://github.com/mstksg/advent-of-code/blob/master/2024/AOC2024/Day20.hs
[d20s]: https://github.com/mstksg/advent-of-code/blob/main/reflections/2024/day20.md
### Day 20 Benchmarks
```
>> Day 20a
benchmarking...
time 40.47 ms (38.84 ms .. 41.22 ms)
0.995 R² (0.979 R² .. 1.000 R²)
mean 41.80 ms (41.15 ms .. 43.37 ms)
std dev 1.933 ms (950.7 μs .. 3.288 ms)
variance introduced by outliers: 12% (moderately inflated)
* parsing and formatting times excluded
>> Day 20b
benchmarking...
time 393.2 ms (380.9 ms .. 402.4 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 393.2 ms (391.2 ms .. 396.0 ms)
std dev 2.621 ms (1.132 ms .. 3.309 ms)
variance introduced by outliers: 19% (moderately inflated)
* parsing and formatting times excluded
```
Day 21
------
<!--
This section is generated and compiled by the nix derivation at ./site/default.nix from the file
`./reflections/2024/day21.md`. If you want to edit this, edit that file instead!
-->
*[Top](#)* / *[Prompt][d21p]* / *[Code][d21g]*
[d21p]: https://adventofcode.com/2024/day/21
[d21g]: https://github.com/mstksg/advent-of-code/blob/master/2024/AOC2024/Day21.hs
[d21s]: https://github.com/mstksg/advent-of-code/blob/main/reflections/2024/day21.md
### Day 21 Benchmarks
```
>> Day 21a
benchmarking...
time 3.965 μs (3.959 μs .. 3.970 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 3.967 μs (3.961 μs .. 3.973 μs)
std dev 17.27 ns (14.75 ns .. 20.43 ns)
* parsing and formatting times excluded
>> Day 21b
benchmarking...
time 3.971 μs (3.958 μs .. 3.988 μs)
0.999 R² (0.996 R² .. 1.000 R²)
mean 4.020 μs (3.971 μs .. 4.206 μs)
std dev 304.7 ns (22.58 ns .. 645.9 ns)
variance introduced by outliers: 80% (severely inflated)
* parsing and formatting times excluded
```
Day 22
------
<!--
This section is generated and compiled by the nix derivation at ./site/default.nix from the file
`./reflections/2024/day22.md`. If you want to edit this, edit that file instead!
-->
*[Top](#)* / *[Prompt][d22p]* / *[Code][d22g]*
[d22p]: https://adventofcode.com/2024/day/22
[d22g]: https://github.com/mstksg/advent-of-code/blob/master/2024/AOC2024/Day22.hs
[d22s]: https://github.com/mstksg/advent-of-code/blob/main/reflections/2024/day22.md
### Day 22 Benchmarks
```
>> Day 22a
benchmarking...
time 84.47 ms (82.54 ms .. 88.08 ms)
0.998 R² (0.993 R² .. 1.000 R²)
mean 83.30 ms (82.68 ms .. 84.79 ms)
std dev 1.625 ms (543.2 μs .. 2.715 ms)
* parsing and formatting times excluded
>> Day 22b
benchmarking...
time 5.653 s (5.314 s .. 6.426 s)
0.998 R² (NaN R² .. 1.000 R²)
mean 6.488 s (6.099 s .. 7.163 s)
std dev 642.2 ms (58.08 ms .. 808.9 ms)
variance introduced by outliers: 23% (moderately inflated)
* parsing and formatting times excluded
```
Day 23
------
<!--
This section is generated and compiled by the nix derivation at ./site/default.nix from the file
`./reflections/2024/day23.md`. If you want to edit this, edit that file instead!
-->
*[Top](#)* / *[Prompt][d23p]* / *[Code][d23g]* / *[Standalone][d23s]*
[d23p]: https://adventofcode.com/2024/day/23
[d23g]: https://github.com/mstksg/advent-of-code/blob/master/2024/AOC2024/Day23.hs
[d23s]: https://github.com/mstksg/advent-of-code/blob/main/reflections/2024/day23.md
This one ends up being a nice hylomorphism.
We can build the upper triangle of the adjacency map: only include edges from
items to items later in the alphabet.
```haskell
connMap :: Ord a => [(a, a)] -> Map a (Set a)
connMap xs =
M.unionsWith
(<>)
[ M.fromList [(a, S.singleton b), (b, S.empty)]
| [a, b] <- xs <&> \(x, y) -> sort [x, y]
]
```
Part 1 we can manually unroll:
```haskell
part1 :: Map String (Set String) -> Int
part1 conns = length do
(a, adjA) <- M.toList conns
b <- toList adjA
c <- toList $ (conns M.! b) `S.intersection` adjA
guard $ any ("t" `isPrefixOf`) [a, b, c]
```
This is using the list monad's non-determinism for a depth-first search: for every item `a`, all of the items `b` in its adjacencies are valid in its triple. From there we can add any item `c` in the adjacencies of `b`, provided `c` is also in `adjA`, the adjacencies of `a`.
Part 2 is where things get fun. One way to look at it is: from each starting point, build a tree of all adjacency hops from it that are valid, where each next child must be reachable from all of its parents. Then, collapse all branching paths from top to bottom.
Therefore, our base functor is a list of parents to children:
```haskell
newtype Branch a = Branch { unBranch :: [(String, a)] }
deriving Functor
```
And now we are in good shape to write our hylomorphism:
```haskell
allCliques :: Map String (Set String) -> [[String]]
allCliques conns = hylo tearDown build (M.toList conns)
where
build = Branch
. map (\(a, cands) -> (a, [(b, cands `S.intersection` (conns M.! b)) | b <- toList cands]))
tearDown = foldMap (\(here, there) -> (here :) <$> if null there then pure [] else there)
. unBranch
part2 :: Map String (Set String) -> [String]
part2 = maximumBy (comparing length) . allCliques
```
### Day 23 Benchmarks
```
>> Day 23a
benchmarking...
time 3.750 ms (3.729 ms .. 3.780 ms)
0.998 R² (0.995 R² .. 1.000 R²)
mean 3.789 ms (3.762 ms .. 3.836 ms)
std dev 127.3 μs (65.75 μs .. 221.4 μs)
variance introduced by outliers: 16% (moderately inflated)
* parsing and formatting times excluded
>> Day 23b
benchmarking...
time 48.57 ms (48.41 ms .. 48.73 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 48.66 ms (48.57 ms .. 48.78 ms)
std dev 200.5 μs (137.2 μs .. 264.3 μs)
* parsing and formatting times excluded
```
Day 24
------
<!--
This section is generated and compiled by the nix derivation at ./site/default.nix from the file
`./reflections/2024/day24.md`. If you want to edit this, edit that file instead!
-->
*[Top](#)* / *[Prompt][d24p]* / *[Code][d24g]*
[d24p]: https://adventofcode.com/2024/day/24
[d24g]: https://github.com/mstksg/advent-of-code/blob/master/2024/AOC2024/Day24.hs
[d24s]: https://github.com/mstksg/advent-of-code/blob/main/reflections/2024/day24.md
### Day 24 Benchmarks
```
>> Day 24a
benchmarking...
time 97.06 μs (95.96 μs .. 97.88 μs)
0.999 R² (0.999 R² .. 1.000 R²)
mean 94.19 μs (93.47 μs .. 94.95 μs)
std dev 2.477 μs (2.155 μs .. 2.931 μs)
variance introduced by outliers: 23% (moderately inflated)
* parsing and formatting times excluded
>> Day 24b
benchmarking...
time 1.696 s (1.664 s .. 1.734 s)
1.000 R² (1.000 R² .. 1.000 R²)
mean 1.699 s (1.690 s .. 1.712 s)
std dev 12.02 ms (88.75 μs .. 14.67 ms)
variance introduced by outliers: 19% (moderately inflated)
* parsing and formatting times excluded
```
Day 25
------
<!--
This section is generated and compiled by the nix derivation at ./site/default.nix from the file
`./reflections/2024/day25.md`. If you want to edit this, edit that file instead!
-->
*[Top](#)* / *[Prompt][d25p]* / *[Code][d25g]* / *[Standalone][d25s]*
[d25p]: https://adventofcode.com/2024/day/25
[d25g]: https://github.com/mstksg/advent-of-code/blob/master/2024/AOC2024/Day25.hs
[d25s]: https://github.com/mstksg/advent-of-code/blob/main/reflections/2024/day25.md
As usual, a nice relaxing day to celebrate Christmas :)
Assuming we have a list of keys and locks interspersed, as `[Set (Int, Int)]`, we
can marginalize to get the x-wise histograms and y-wise histograms:
```haskell
marginX :: Set (Int, Int) -> Map Int Int
marginX = M.fromListWith (+) . map (\(x, y) -> (x, 1)) . toList
marginY :: Set (Int, Int) -> Map Int Int
marginY = M.fromListWith (+) . map (\(x, y) -> (y, 1)) . toList
```
We can distinguish keys from locks by checking if y=0 has all 5 points filled:
```haskell
isLock :: Set (Int, Int) -> Bool
isLock = (== 5) . M.findWithDefault 0 0 . marginY
```
We can check if a lock/key pair is valid by checking that none of their x margins add up to more than 7. Wrapping it all in the list monad's cartesian product, we get:
```haskell
day25 :: [Set (Int, Int)] -> Int
day25 = uncurry countCombos . partition isLock
where
countCombos locks keys = length do
lock <- marginX <$> locks
key <- marginX <$> keys
guard $ all (< 8) (M.unionWith (+) lock key)
```
### Day 25 Benchmarks
```
>> Day 25a
benchmarking...
time 6.789 ms (6.668 ms .. 6.890 ms)
0.990 R² (0.973 R² .. 1.000 R²)
mean 6.953 ms (6.849 ms .. 7.262 ms)
std dev 552.1 μs (61.43 μs .. 1.049 ms)
variance introduced by outliers: 46% (moderately inflated)
* parsing and formatting times excluded
```
```