
🖨️ - 2024 DAY 5 SOLUTIONS - 🖨️

Day 5: Print Queue

Megathread guidelines

  • Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
  • You can post code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sharing it through a URL

FAQ

57 comments
  • Factor

     factor
        
    : get-input ( -- rules updates )
      "vocab:aoc-2024/05/input.txt" utf8 file-lines
      { "" } split1
      "|" "," [ '[ [ _ split ] map ] ] bi@ bi* ;
    
    : relevant-rules ( rules update -- rules' )
      '[ [ _ in? ] all? ] filter ;
    
    : compliant? ( rules update -- ? )
      [ relevant-rules ] keep-under
      [ [ index* ] with map first2 < ] with all? ;
    
    : middle-number ( update -- n )
      dup length 2 /i nth-of string>number ;
    
    : part1 ( -- n )
      get-input
      [ compliant? ] with
      [ middle-number ] filter-map sum ;
    
    : compare-pages ( rules page1 page2 -- <=> )
      [ 2array relevant-rules ] keep-under
      [ drop +eq+ ] [ first index zero? +gt+ +lt+ ? ] if-empty ;
    
    : correct-update ( rules update -- update' )
      [ swapd compare-pages ] with sort-with ;
    
    : part2 ( -- n )
      get-input dupd
      [ compliant? ] with reject
      [ correct-update middle-number ] with map-sum ;
    
      

    on GitHub

  • Kotlin

    Took me a while to figure out how to sort according to the rules. 🤯

     kotlin
        
    fun part1(input: String): Int {
        val (rules, listOfNumbers) = parse(input)
        return listOfNumbers
            .filter { numbers -> numbers == sort(numbers, rules) }
            .sumOf { numbers -> numbers[numbers.size / 2] }
    }
    
    fun part2(input: String): Int {
        val (rules, listOfNumbers) = parse(input)
        return listOfNumbers
            .filterNot { numbers -> numbers == sort(numbers, rules) }
            .map { numbers -> sort(numbers, rules) }
            .sumOf { numbers -> numbers[numbers.size / 2] }
    }
    
    private fun sort(numbers: List<Int>, rules: List<Pair<Int, Int>>): List<Int> {
        return numbers.sortedWith { a, b -> if (rules.contains(a to b)) -1 else 1 }
    }
    
    private fun parse(input: String): Pair<List<Pair<Int, Int>>, List<List<Int>>> {
        val (rulesSection, numbersSection) = input.split("\n\n")
        val rules = rulesSection.lines()
            .mapNotNull { line -> """(\d{2})\|(\d{2})""".toRegex().matchEntire(line) }
            .map { match -> match.groups[1]?.value?.toInt()!! to match.groups[2]?.value?.toInt()!! }
        val numbers = numbersSection.lines().map { line -> line.split(',').map { it.toInt() } }
        return rules to numbers
    }
      
      
  • Haskell

    Part two was actually much easier than I thought it would be!

     haskell
        
    import Control.Arrow
    import Data.Bool
    import Data.List
    import Data.List.Split
    import Data.Maybe
    
    readInput :: String -> ([(Int, Int)], [[Int]])
    readInput = (readRules *** readUpdates . tail) . break null . lines
      where
        readRules = map $ (read *** read . tail) . break (== '|')
        readUpdates = map $ map read . splitOn ","
    
    mid = (!!) <*> ((`div` 2) . length)
    
    isSortedBy rules = (`all` rules) . match
      where
        match ps (x, y) = fromMaybe True $ (<) <$> elemIndex x ps <*> elemIndex y ps
    
    pageOrder rules = curry $ bool GT LT . (`elem` rules)
    
    main = do
      (rules, updates) <- readInput <$> readFile "input05"
      let (part1, part2) = partition (isSortedBy rules) updates
      mapM_ (print . sum . map mid) [part1, sortBy (pageOrder rules) <$> part2]
    
      
  • Dart

    A bit easier than I first thought it was going to be.

    I had a look at the Uiua discussion, and this one looks to be beyond my pay grade, so this will be it for today.

     dart
        
    import 'package:collection/collection.dart';
    import 'package:more/more.dart';
    
    (int, int) solve(List<String> lines) {
      var parts = lines.splitAfter((e) => e == '');
      var pred = SetMultimap.fromEntries(parts.first.skipLast(1).map((e) {
        var ps = e.split('|').map(int.parse);
        return MapEntry(ps.last, ps.first);
      }));
      ordering(a, b) => pred[a].contains(b) ? 1 : 0;
    
      var pageSets = parts.last.map((e) => e.split(',').map(int.parse).toList());
      var partn = pageSets.partition((ps) => ps.isSorted(ordering));
      return (
        partn.truthy.map((e) => e[e.length ~/ 2]).sum,
        partn.falsey.map((e) => (e..sort(ordering))[e.length ~/ 2]).sum
      );
    }
    
    part1(List<String> lines) => solve(lines).$1;
    part2(List<String> lines) => solve(lines).$2;
    
    
      
  • Rust

    While part 1 was pretty quick, part 2 took me a while to figure something out. I figured that the relation would probably be a total ordering, and obtained the actual order using topological sorting. But it turns out the relation has cycles, so the topological sort must be limited to the elements that actually occur in the lists.

    also on github
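
    A minimal Python sketch (not the poster's Rust code) of the restricted topological sort described above: build the graph only from rules whose two pages both occur in the update, so cycles elsewhere in the rule set never get in the way. The rule and update values below are made up for illustration.

     python
        
    from collections import defaultdict, deque

    def order_update(pages, rules):
        """Kahn's algorithm on the subgraph induced by this update's pages."""
        present = set(pages)
        succ = defaultdict(set)            # page -> pages that must come after it
        indeg = {p: 0 for p in pages}
        for a, b in rules:
            if a in present and b in present and b not in succ[a]:
                succ[a].add(b)
                indeg[b] += 1
        queue = deque(p for p in pages if indeg[p] == 0)
        ordered = []
        while queue:
            p = queue.popleft()
            ordered.append(p)
            for q in succ[p]:
                indeg[q] -= 1
                if indeg[q] == 0:
                    queue.append(q)
        return ordered

    # Hypothetical rules as (before, after) pairs; (97, 13) is ignored for this update.
    rules = [(47, 53), (97, 13), (75, 47), (61, 53), (75, 61), (47, 61)]
    print(order_update([75, 47, 61, 53], rules))   # e.g. [75, 47, 61, 53]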

  • I was very unhappy because my previous implementation took 1 second to execute and thrashed through 2 GB of RAM in the process, so I sat down again with some inspiration about the sorting approach.
    I am very happy now; the profiler tells me that most of the time is now spent in the parsing functions.

    I am also grateful to everyone else doing Haskell; this way I learned about Arrays, Bifunctors and Arrows, which (I think) improved my code a lot.

    Haskell

     haskell
        
    import Control.Arrow hiding (first, second)
    
    import Data.Map (Map)
    import Data.Set (Set)
    import Data.Bifunctor
    
    import qualified Data.Maybe as Maybe
    import qualified Data.List as List
    import qualified Data.Map as Map
    import qualified Data.Set as Set
    import qualified Data.Ord as Ord
    
    
    parseRule :: String -> (Int, Int)
    parseRule s = (read . take 2 &&& read . drop 3) s
    
    replace t r c = if t == c then r else c
    
    parse :: String -> (Map Int (Set Int), [[Int]])
    parse s = (map parseRule >>> buildRuleMap $ rules, map (map read . words) updates)
            where
                    rules = takeWhile (/= "") . lines $ s
                    updates = init . map (map (replace ',' ' ')) . drop 1 . dropWhile (/= "") . lines $ s
    
    middleElement :: [a] -> a
    middleElement us = (us !!) $ (length us `div` 2)
    
    ruleGroup :: Eq a => (a, b) -> (a, b') -> Bool
    ruleGroup = curry (uncurry (==) <<< fst *** fst)
    
    buildRuleMap :: [(Int, Int)] -> Map Int (Set Int)
    buildRuleMap rs = List.sortOn fst
            >>> List.groupBy ruleGroup 
            >>> map ((fst . head) &&& map snd) 
            >>> map (second Set.fromList) 
            >>> Map.fromList 
            $ rs
    
    elementSort :: Map Int (Set Int) -> Int -> Int -> Ordering 
    elementSort rs a b
            | Maybe.maybe False (Set.member b) (rs Map.!? a) = LT
            | Maybe.maybe False (Set.member a) (rs Map.!? b) = GT
            | otherwise = EQ
    
    isOrdered rs u = (List.sortBy (elementSort rs) u) == u
    
    part1 (rs, us) = filter (isOrdered rs)
            >>> map middleElement
            >>> sum
            $ us
    part2 (rs, us) = filter (isOrdered rs >>> not)
            >>> map (List.sortBy (elementSort rs))
            >>> map middleElement
            >>> sum
            $ us
    
    main = getContents >>= print . (part1 &&& part2) . parse
    
      
  • Haskell

    I should probably have used sortBy instead of this ad-hoc selection sort.

     haskell
        
    import Control.Arrow
    import Control.Monad
    import Data.Char
    import Data.List qualified as L
    import Data.Map
    import Data.Set
    import Data.Set qualified as S
    import Text.ParserCombinators.ReadP
    
    parse = (,) <$> (fromListWith S.union <$> parseOrder) <*> (eol *> parseUpdate)
    parseOrder = endBy (flip (,) <$> (S.singleton <$> parseInt <* char '|') <*> parseInt) eol
    parseUpdate = endBy (sepBy parseInt (char ',')) eol
    parseInt = read <$> munch1 isDigit
    eol = char '\n'
    
    verify :: Map Int (Set Int) -> [Int] -> Bool
    verify m = and . (zipWith fn <*> scanl (flip S.insert) S.empty)
      where
        fn a = flip S.isSubsetOf (findWithDefault S.empty a m)
    
    getMiddle = ap (!!) ((`div` 2) . length)
    
    part1 m = sum . fmap getMiddle
    
    getOrigin :: Map Int (Set Int) -> Set Int -> Int
    getOrigin m l = head $ L.filter (S.disjoint l . preds) (S.toList l)
      where
        preds = flip (findWithDefault S.empty) m
    
    order :: Map Int (Set Int) -> Set Int -> [Int]
    order m s
      | S.null s = []
      | otherwise = h : order m (S.delete h s)
        where
          h = getOrigin m s
    
    part2 m = sum . fmap (getMiddle . order m . S.fromList)
    
    main = getContents >>= print . uncurry runParts . fst . last . readP_to_S parse
    runParts m = L.partition (verify m) >>> (part1 m *** part2 m)
    
      
  • Nim

     nim
        
    import ../aoc, strutils, sequtils, tables
    
    type
      Rules = ref Table[int, seq[int]]
    
    #check if an update sequence is valid
    proc valid(update:seq[int], rules:Rules):bool =
      for pi, p in update:
        for r in rules.getOrDefault(p):
          let ri = update.find(r)
          if ri != -1 and ri < pi:
            return false
      return true
    
    proc backtrack(p:int, index:int, update:seq[int], rules: Rules, sorted: var seq[int]):bool =
      if index == 0:
        sorted[index] = p
        return true
      
      for r in rules.getOrDefault(p):
        if r in update and r.backtrack(index-1, update, rules, sorted):
          sorted[index] = p
          return true
      
      return false
    
    #fix an invalid sequence
    proc fix(update:seq[int], rules: Rules):seq[int] =
      echo "fixing", update
      var sorted = newSeqWith(update.len, 0);
      for p in update:
        if p.backtrack(update.len-1, update, rules, sorted):
          return sorted
      return @[]
    
    proc solve*(input:string): array[2,int] =
      let parts = input.split("\r\n\r\n");
      
      let rulePairs = parts[0].splitLines.mapIt(it.strip.split('|').map(parseInt))
      let updates = parts[1].splitLines.mapIt(it.split(',').map(parseInt))
      
      # fill rules table
      var rules = new Rules
      for rp in rulePairs:
        if rules.hasKey(rp[0]):
          rules[rp[0]].add rp[1];
        else:
          rules[rp[0]] = @[rp[1]]
          
      # fill reverse rules table
      var backRules = new Rules
      for rp in rulePairs:
        if backRules.hasKey(rp[1]):
          backRules[rp[1]].add rp[0];
        else:
          backRules[rp[1]] = @[rp[0]]
      
      for u in updates:
        if u.valid(rules):
          result[0] += u[u.len div 2]
        else:
          let uf = u.fix(backRules)
          result[1] += uf[uf.len div 2]
    
      

    I thought of doing a sort at first but dismissed it for some reason, so I came up with this slow and bulky recursive backtracking thing, which traverses the rules as a graph until it reaches a depth equal to the length of the given sequence (see the sketch below). Not my finest work, but it does solve the puzzle :)
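
    A rough Python sketch of that backtracking idea, assuming the rules are kept as a map from each page to the pages that must come before it (like the reverse-rules table above); the function and variable names are made up for illustration.

     python
        
    def fix_update(update, pred_rules):
        """Place a candidate page last, then fill the slot before it with one
        of its rule-predecessors that is also in the update, recursing until
        the whole update is covered (and backtracking when stuck)."""
        pages = set(update)

        def chain(page, remaining):
            if not remaining:
                return [page]
            for before in pred_rules.get(page, ()):
                if before in remaining:
                    tail = chain(before, remaining - {before})
                    if tail is not None:
                        return tail + [page]
            return None

        for last in update:
            result = chain(last, pages - {last})
            if result is not None:
                return result
        return []

    # Hypothetical predecessor map built from rules 75|47, 47|61, 61|53:
    pred = {47: {75}, 61: {47}, 53: {61}}
    print(fix_update([61, 53, 75, 47], pred))   # [75, 47, 61, 53]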

  • Haskell

    It's more complicated than it needs to be; I could've done the first part just like the second.
    Also, it takes one second (!) to run .-.

     haskell
        
    import Data.Maybe as Maybe
    import Data.List as List
    import Control.Arrow hiding (first, second)
    
    parseRule :: String -> (Int, Int)
    parseRule s = (read . take 2 &&& read . drop 3) s
    
    replace t r c = if t == c then r else c
    
    parse :: String -> ([(Int, Int)], [[Int]])
    parse s = (map parseRule rules, map (map read . words) updates)
            where
                    rules = takeWhile (/= "") . lines $ s
                    updates = init . map (map (replace ',' ' ')) . drop 1 . dropWhile (/= "") . lines $ s
    
    validRule (pairLeft, pairRight) (ruleLeft, ruleRight)
            | pairLeft == ruleRight && pairRight == ruleLeft = False
            | otherwise = True
    
    validatePair rs p = all (validRule p) rs
    
    validateUpdate rs u = all (validatePair rs) pairs
            where 
                    pairs = List.concatMap (\ t -> map (head t, ) (tail t)) . filter (length >>> (> 1)) . tails $ u
    
    middleElement :: [a] -> a
    middleElement us = (us !!) $ (length us `div` 2)
    
    part1 (rs, us) = sum . map (middleElement) . filter (validateUpdate rs) $ us
    
    insertOrderly rs i is = insertOrderly' frontRules i is
            where
                    frontRules = filter (((== i) . fst)) rs
    
    insertOrderly' _  i [] = [i]
    insertOrderly' rs i (i':is)
            | any (snd >>> (== i')) rs = i : i' : is
            | otherwise = i' : insertOrderly' rs i is
    
    part2 (rs, us) = sum . map middleElement . Maybe.mapMaybe ((orderUpdate &&& id) >>> \ p -> if (fst p /= snd p) then Just $ fst p else Nothing) $ us
            where
                    orderUpdate = foldr (insertOrderly rs) []
    
    main = getContents >>= print . (part1 &&& part2) . parse
    
      
  • Well, this one ended up with a surprisingly easy part 2, given how I wrote it.
    Not the most computationally optimal code, but since both parts still run in milliseconds I'm not overly bothered.

  • Python

    Also took advantage of cmp_to_key.

     python
        
    from functools import cmp_to_key
    from pathlib import Path
    
    
    def parse_input(input: str) -> tuple[dict[int, list[int]], list[list[int]]]:
        rules, updates = tuple(input.strip().split("\n\n")[:2])
        order = {}
        for entry in rules.splitlines():
            values = entry.split("|")
            order.setdefault(int(values[0]), []).append(int(values[1]))
        updates = [[int(v) for v in u.split(",")] for u in updates.splitlines()]
        return (order, updates)
    
    
    def is_ordered(update: list[int], order: dict[int, list[int]]) -> bool:
        return update == sorted(
            update, key=cmp_to_key(lambda a, b: 1 if a in order.setdefault(b, []) else -1)
        )
    
    
    def part_one(input: str) -> int:
        order, updates = parse_input(input)
        return sum([u[len(u) // 2] for u in (u for u in updates if is_ordered(u, order))])
    
    
    def part_two(input: str) -> int:
        order, updates = parse_input(input)
        return sum(
            [
                sorted(u, key=cmp_to_key(lambda a, b: 1 if a in order[b] else -1))[
                    len(u) // 2
                ]
                for u in (u for u in updates if not is_ordered(u, order))
            ]
        )
    
    
    if __name__ == "__main__":
        input = Path("input").read_text("utf-8")
        print(part_one(input))
        print(part_two(input))
    
      
  • Python

    sort using a compare function

     python
        
    from math import floor
    from pathlib import Path
    from functools import cmp_to_key
    cwd = Path(__file__).parent
    
    def parse_protocol(path):
    
      with path.open("r") as fp:
        data = fp.read().splitlines()
    
      rules = data[:data.index('')]
      page_to_rule = {r.split('|')[0]:[] for r in rules}
      [page_to_rule[r.split('|')[0]].append(r.split('|')[1]) for r in rules]
    
      updates = list(map(lambda x: x.split(','), data[data.index('')+1:]))
    
      return page_to_rule, updates
    
    def sort_pages(pages, page_to_rule):
    
      compare_pages = lambda page1, page2:\
        0 if page1 not in page_to_rule or page2 not in page_to_rule[page1] else -1
    
      return sorted(pages, key = cmp_to_key(compare_pages))
    
    def solve_problem(file_name, fix):
    
      page_to_rule, updates = parse_protocol(Path(cwd, file_name))
    
      to_print = [temp_p[int(floor(len(pages)/2))] for pages in updates
                  if (not fix and (temp_p:=pages) == sort_pages(pages, page_to_rule))
                  or (fix and (temp_p:=sort_pages(pages, page_to_rule)) != pages)]
    
      return sum(map(int,to_print))
    
      
  • J

    This is a problem where J's biases lead one to a very different solution from most of the others. The natural representation of a directed graph in J is an adjacency matrix, and sorting is specified in terms of a permutation to apply rather than in terms of a comparator: x /: y (respectively x \: y) determines the permutation that would put y in ascending (descending) order, then applies that permutation to x.

     j
        
    data_file_name =: '5.data'
    lines =: cutopen fread data_file_name
    NB. manuals start with the first line where the index of a comma is < 5
    start_of_manuals =: 1 i.~ 5 > ',' i.~"1 > lines
    NB. ". can't parse the | so replace it with a space
    edges =: ". (' ' & (2}))"1 > start_of_manuals {. lines
    NB. don't unbox and parse yet because they aren't all the same length
    manuals =: start_of_manuals }. lines
    max_page =: >./ , edges
    NB. adjacency matrix of the page partial ordering; e.i. makes identity matrix
    adjacency =: 1 (< edges)} e. i. >: max_page
    NB. ordered line is true if line is ordered according to the adjacency matrix
    ordered =: monad define
       pages =. ". > y
       NB. index pairs 0 <: i < j < n; box and raze to avoid array fill
       page_pairs =. ; (< @: (,~"0 i.)"0) i. # pages
       */ adjacency {~ <"1 pages {~ page_pairs
    )
    midpoint =: ({~ (<. @: -: @: #)) @: ". @: >
    result1 =: +/ (ordered"0 * midpoint"0) manuals
    
    NB. toposort line yields the pages of line topologically sorted by adjacency
    NB. this is *not* a general topological sort but works for our restricted case:
    NB. we know that each individual manual will be totally ordered
    toposort =: monad define
       pages =. ". > y
       NB. for each page, count the pages which come after it, then sort descending
       pages \: +/"1 adjacency {~ <"1 pages ,"0/ pages
    )
    NB. midpoint2 doesn't parse, but does remove trailing zeroes
    midpoint2 =: ({~ (<. @: -: @: #)) @: ({.~ (i. & 0))
    result2 =: +/ (1 - ordered"0 manuals) * midpoint2"1 toposort"0 manuals
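
    For anyone not fluent in J, here is a rough Python analogue of the toposort verb above: count, for each page, how many of the update's pages must come after it, then sort the update descending by that count. The adjacency/rule representation below is an assumption, not taken from the J code.

     python
        
    def reorder(pages, after):
        """Sort pages by how many of this update's pages must follow them.
        `after[p]` is the set of pages that the rules place after page p."""
        present = set(pages)

        def count_after(p):
            return len(after.get(p, set()) & present)

        # Descending successor count plays the role of J's grade-down (\:).
        return sorted(pages, key=count_after, reverse=True)

    after = {75: {47, 61, 53}, 47: {61, 53}, 61: {53}}   # hypothetical rules
    print(reorder([61, 75, 53, 47], after))              # [75, 47, 61, 53]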
    
      
  • Kotlin

    That was an easy one, once you define a comparator function. (At least when you have a sorting function in your standard library.) The biggest part was the parsing. lol

     kotlin
        
    import kotlin.text.Regex
    
    fun main() {
        fun part1(input: List<String>): Int = parseInput(input).sumOf { if (it.isCorrectlyOrdered()) it[it.size / 2].pageNumber else 0 }
    
        fun part2(input: List<String>): Int = parseInput(input).sumOf { if (!it.isCorrectlyOrdered()) it.sorted()[it.size / 2].pageNumber else 0 }
    
        val testInput = readInput("Day05_test")
        check(part1(testInput) == 143)
        check(part2(testInput) == 123)
    
        val input = readInput("Day05")
        part1(input).println()
        part2(input).println()
    }
    
    fun parseInput(input: List<String>): List<List<Page>> {
        val (orderRulesStrings, pageSequencesStrings) = input.filter { it.isNotEmpty() }.partition { Regex("""\d+\|\d+""").matches(it) }
    
        val orderRules = orderRulesStrings.map { with(it.split('|')) { this[0].toInt() to this[1].toInt() } }
        val orderRulesX = orderRules.map { it.first }.toSet()
        val pages = orderRulesX.map { pageNumber ->
            val orderClasses = orderRules.filter { it.first == pageNumber }.map { it.second }
            Page(pageNumber, orderClasses)
        }.associateBy { it.pageNumber }
    
        val pageSequences = pageSequencesStrings.map { sequenceString ->
            sequenceString.split(',').map { pages[it.toInt()] ?: Page(it.toInt(), emptyList()) }
        }
    
        return pageSequences
    }
    
    /*
     * An order class is an equivalence class for every page with the same page to be printed before.
     */
    data class Page(val pageNumber: Int, val orderClasses: List<Int>): Comparable<Page> {
        override fun compareTo(other: Page): Int =
            if (other.pageNumber in orderClasses) -1
            else if (pageNumber in other.orderClasses) 1
            else 0
    }
    
    fun List<Page>.isCorrectlyOrdered(): Boolean = this == this.sorted()
    
     
      
  • Smalltalk

    Parsing logic is duplicated between the two parts, and I probably could use part 2's logic for part 1, but yeah.

    part 1

     smalltalk
        
    day5p1: in
        | rules pages i j input |
    
        input := in lines.
        i := input indexOf: ''.
        rules := ((input copyFrom: 1 to: i-1) collect: [:l | (l splitOn: '|') collect: #asInteger]).
        pages := (input copyFrom: i+1 to: input size) collect: [:l | (l splitOn: ',') collect: #asInteger].
        
        ^ pages sum: [ :p |
            (rules allSatisfy: [ :rule |
                i := p indexOf: (rule at: 1).
                j := p indexOf: (rule at: 2).
                (i ~= 0 & (j ~= 0)) ifTrue: [ i < j ] ifFalse: [ true ]
            ])
                ifTrue: [p at: ((p size / 2) round: 0) ]
                ifFalse: [0].
        ]
    
    
      

    part 2

     smalltalk
        
    day5p2: in
        | rules pages i pnew input |
    
        input := in lines.
        i := input indexOf: ''.
        rules := ((input copyFrom: 1 to: i-1) collect: [:l | (l splitOn: '|') collect: #asInteger]).
        pages := (input copyFrom: i+1 to: input size) collect: [:l | (l splitOn: ',') collect: #asInteger].
        
        ^ pages sum: [ :p |
            pnew := p sorted: [ :x :y | 
                rules anySatisfy: [ :r | (r at: 1) = x and: [ (r at: 2) = y]]
            ].
            pnew ~= p
                ifTrue: [ pnew at: ((pnew size / 2) round: 0) ]
                ifFalse: [0].
        ]
    
      
  • Python

    (Part 1) omg I can't believe this actually worked first try!

     python
        
    with open('input') as data:
        parts = data.read().rstrip().split("\n\n")
        ordering_rules = parts[0].split("\n")
        updates = parts[1].split("\n")
    
    correct_updates = []
    middle_updates = []
    
    def find_relevant_rules(pg_num: str, rules: list[str]) -> list[str]:
        # keep only the rules whose left-hand page is pg_num
        return list(filter(lambda x: x.split("|")[0] == pg_num, rules))
    
    def interpret_rule(rule: str) -> list[str]:
        return rule.split("|")
    
    def interpret_update(update: str) -> list[str]:
        return update.split(",")
    
    def find_middle_update_index(update: list[str]) -> int:
        num_of_elements = len(update)
        return num_of_elements // 2
    
    for update in updates:
        is_correct = True
        for i, page in enumerate(interpret_update(update)):
           rules_to_check = find_relevant_rules(page, ordering_rules) 
           for rule in rules_to_check:
               if rule.split("|")[1] in interpret_update(update)[:i]:
                   is_correct = False
        if is_correct:
            correct_updates.append(update)
    
    for update in correct_updates:
        split_update = update.split(",")
        middle_updates.append(int(split_update[find_middle_update_index(split_update)]))
    print(sum(middle_updates))
    
      
  • Uiua

    This is the first one that caused me some headache because I didn't read the instructions carefully enough.
    I kept trying to build a sorted list out of all available pages, which got me stuck in an endless loop.

    Another fun part was figuring out that I needed memberof (∈) instead of find (⌕) in the last line of FindNext. So much time was spent debugging other areas of the code.

    Run with example input here

     uiua
        
    FindNext ← ⊙(
      ⊡1⍉,
      ⊃▽(▽¬)⊸∈
      ⊙⊙(⊡0⍉.)
      :⊙(⟜(▽¬∈))
    )
    
    # find the order of pages for a given set of rules
    FindOrder ← (
      ◴♭.
      []
      ⍢(⊂FindNext|⋅(>1⧻))
      ⊙◌⊂
    )
    
    PartOne ← (
      &rs ∞ &fo "input-5.txt"
      ∩°□°⊟⊜□¬⌕"\n\n".
      ⊙(⊜(□⊜⋕≠@,.)≠@\n.↘1)
      ⊜(⊜⋕≠@|.)≠@\n.
    
      ⊙.
      ¤
      ⊞(◡(°□:)
        ⟜:⊙(°⊟⍉)
        =2+∩∈
        ▽
        FindOrder
        ⊸≍°□:
        ⊙◌
      )
      ≡◇(⊡⌊÷2⧻.)▽♭
      /+
    )
    
    PartTwo ← (
      &rs ∞ &fo "input-5.txt"
      ∩°□°⊟⊜□¬⌕"\n\n".
      ⊙(⊜(□⊜⋕≠@,.)≠@\n.↘1)
      ⊜(⊜⋕≠@|.)≠@\n.
      ⊙.
      ⍜¤⊞(
        ◡(°□:)
        ⟜:⊙(°⊟⍉)
        =2+∩∈
        ▽
        FindOrder
        ⊸≍°□:
        ⊟∩□
      )
      ⊙◌
      ⊃(⊡0)(⊡1)⍉
      ≡◇(⊡⌊÷2⧻.)▽¬≡°□
      /+
    )
    
    &p "Day 5:"
    &pf "Part 1: "
    &p PartOne
    &pf "Part 2: "
    &p PartTwo
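
    A rough Python rendering of the FindOrder/FindNext loop above, for readers who don't speak Uiua: repeatedly pick a page that no other remaining page must precede, emit it, and repeat until nothing is left. The rule representation below is an assumption.

     python
        
    def find_order(pages, rules):
        """`rules` is a list of (before, after) pairs, as in the puzzle input."""
        remaining = list(pages)
        ordered = []
        while remaining:
            # Ready = no rule says some still-remaining page must come first.
            ready = next(p for p in remaining
                         if not any(a == p and b in remaining
                                    for b, a in rules))
            ordered.append(ready)
            remaining.remove(ready)
        return ordered

    rules = [(75, 47), (75, 61), (47, 61), (61, 53), (47, 53)]   # hypothetical
    print(find_order([61, 47, 75, 53], rules))                   # [75, 47, 61, 53]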
    
      
  • I've got a "smart" solution and a really dumb one. I'll start with the smart one (incomplete, but you can infer the rest). I tried four different approaches to get it faster, use less memory, etc.

     csharp
        
    // this is from a nuget package. My Mathy roommate told me this was a topological sort.
    // It's also my preferred, since it'd perform better on larger data sets.
    return lines
        .AsParallel()
        .Where(line => !IsInOrder(GetSoonestOccurrences(line), aggregateRules))
        .Sum(line => line.StableOrderTopologicallyBy(
                getDependencies: page =>
                    aggregateRules.TryGetValue(page, out var mustPreceed) ? mustPreceed.Intersect(line) : Enumerable.Empty<Page>())
            .Middle()
        );
    
      

    The dumb solution. These comparisons aren't fully transitive. I can't believe it works.

     csharp
        
    public static SortedSet<Page> Sort3(Page[] line,
        Dictionary<Page, System.Collections.Generic.HashSet<Page>> rules)
    {
        // how the hell is this working?
        var sorted = new SortedSet<Page>(new Sort3Comparer(rules));
        foreach (var page in line)
            sorted.Add(page);
        return sorted;
    }
    
    public static Page[] OrderBy(Page[] line, Dictionary<Page, System.Collections.Generic.HashSet<Page>> rules)
    {
        return line.OrderBy(identity, new Sort3Comparer(rules)).ToArray();
    }
    
    sealed class Sort3Comparer : IComparer<Page>
    {
        private readonly Dictionary<Page, System.Collections.Generic.HashSet<Page>> _rules;
    
        public Sort3Comparer(Dictionary<Page, System.Collections.Generic.HashSet<Page>> rules) => _rules = rules;
    
        public int Compare(Page x, Page y)
        {
            if (_rules.TryGetValue(x, out var xrules))
            {
                if (xrules.Contains(y))
                    return -1;
            }
    
            if (_rules.TryGetValue(y, out var yrules))
            {
                if (yrules.Contains(x))
                    return 1;
            }
    
            return 0;
        }
    }
    
      
    Method                                  | Mean       | Error    | StdDev   | Gen0     | Gen1    | Allocated
    Part2_UsingList (literally just Insert) | 660.3 us   | 12.87 us | 23.20 us | 187.5000 | 35.1563 | 1144.86 KB
    Part2_TrackLinkedList (wrong now)       | 1,559.7 us | 6.91 us  | 6.46 us  | 128.9063 | 21.4844 | 795.03 KB
    Part2_TopologicalSort                   | 732.3 us   | 13.97 us | 16.09 us | 285.1563 | 61.5234 | 1718.36 KB
    Part2_SortedSet                         | 309.1 us   | 4.13 us  | 3.45 us  | 54.1992  | 10.2539 | 328.97 KB
    Part2_OrderBy                           | 304.5 us   | 6.09 us  | 9.11 us  | 48.8281  | 7.8125  | 301.29 KB
  • Rust

    Used a sorted/unsorted comparison to solve the first part; the second part was then just a matter of filling out the else branch.

     rust
        
    use std::{
        cmp::Ordering,
        collections::HashMap,
        io::{BufRead, BufReader},
    };
    
    fn main() {
        let mut lines = BufReader::new(std::fs::File::open("input.txt").unwrap()).lines();
    
        let mut rules: HashMap<u64, Vec<u64>> = HashMap::default();
    
        for line in lines.by_ref() {
            let line = line.unwrap();
    
            if line.is_empty() {
                break;
            }
    
            let lr = line
                .split('|')
                .map(|el| el.parse::<u64>())
                .collect::<Result<Vec<u64>, _>>()
                .unwrap();
    
            let left = lr[0];
            let right = lr[1];
    
            if let Some(values) = rules.get_mut(&left) {
                values.push(right);
                values.sort();
            } else {
                rules.insert(left, vec![right]);
            }
        }
    
        let mut updates: Vec<Vec<u64>> = Vec::default();
    
        for line in lines {
            let line = line.unwrap();
    
            let update = line
                .split(',')
                .map(|el| el.parse::<u64>())
                .collect::<Result<Vec<u64>, _>>()
                .unwrap();
    
            updates.push(update);
        }
    
        let mut middle_sum = 0;
        let mut fixed_middle_sum = 0;
    
        for update in updates {
            let mut update_sorted = update.clone();
            update_sorted.sort_by(|a, b| {
                if let Some(rules) = rules.get(a) {
                    if rules.contains(b) {
                        Ordering::Less
                    } else {
                        Ordering::Equal
                    }
                } else {
                    Ordering::Equal
                }
            });
    
            if update.eq(&update_sorted) {
                let middle = update[(update.len() - 1) / 2];
                middle_sum += middle;
            } else {
                let middle = update_sorted[(update_sorted.len() - 1) / 2];
                fixed_middle_sum += middle;
            }
        }
    
        println!("part1: {} part2: {}", middle_sum, fixed_middle_sum);
    }
    
      
  • Rust

    I don't love this code, but I didn't initially use a hashmap and it runs so fast it wasn't worth the time to refactor.

     rust
        
    use std::{cmp::Ordering, fs, str::FromStr};
    
    use color_eyre::eyre::{Report, Result};
    use itertools::Itertools;
    
    struct Updates(Vec<Vec<isize>>);
    
    impl FromStr for Updates {
        type Err = Report;
    
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            let pages = s
                .lines()
                .map(|l| l.split(",").map(|n| n.parse::<isize>()).collect())
                .collect::<Result<_, _>>()?;
            Ok(Self(pages))
        }
    }
    
    impl Updates {
        fn get_valid(&self, rules: &OrderingRules) -> Self {
            let pages = self
                .0
                .clone()
                .into_iter()
                .filter(|p| rules.validate(p))
                .collect();
            Self(pages)
        }
    
        fn get_invalid(&self, rules: &OrderingRules) -> Self {
            let pages = self
                .0
                .clone()
                .into_iter()
                .filter(|p| !rules.validate(p))
                .collect();
            Self(pages)
        }
    }
    
    struct OrderingRules(Vec<(isize, isize)>);
    
    impl OrderingRules {
        fn validate(&self, pnums: &Vec<isize>) -> bool {
            self.0.iter().all(|(a, b)| {
                let Some(a_pos) = pnums.iter().position(|&x| x == *a) else {
                    return true;
                };
                let Some(b_pos) = pnums.iter().position(|&x| x == *b) else {
                    return true;
                };
                a_pos < b_pos
            })
        }
    
        fn fix(&self, pnums: &Vec<isize>) -> Vec<isize> {
            let mut v = pnums.clone();
    
            v.sort_by(|a, b| {
                let mut fr = self
                    .0
                    .iter()
                    .filter(|(ra, rb)| (ra == a || ra == b) && (rb == a || rb == b));
                if let Some((ra, _rb)) = fr.next() {
                    if ra == a {
                        Ordering::Less
                    } else {
                        Ordering::Greater
                    }
                } else {
                    Ordering::Equal
                }
            });
            v
        }
    }
    
    impl FromStr for OrderingRules {
        type Err = Report;
    
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            let v = s
                .lines()
                .map(|l| {
                    l.splitn(2, "|")
                        .map(|n| n.parse::<isize>().unwrap())
                        .collect_tuple()
                        .ok_or_else(|| Report::msg("Rules need two items"))
                })
                .collect::<Result<_, _>>()?;
            Ok(Self(v))
        }
    }
    
    fn parse(s: &str) -> Result<(OrderingRules, Updates)> {
        let parts: Vec<_> = s.splitn(2, "\n\n").collect();
        let rules = OrderingRules::from_str(parts[0])?;
        let updates = Updates::from_str(parts[1])?;
        Ok((rules, updates))
    }
    
    fn part1(filepath: &str) -> Result<isize> {
        let input = fs::read_to_string(filepath)?;
        let (rules, updates) = parse(&input)?;
        let res = updates
            .get_valid(&rules)
            .0
            .iter()
            .map(|v| v[v.len() / 2])
            .sum();
        Ok(res)
    }
    
    fn part2(filepath: &str) -> Result<isize> {
        let input = fs::read_to_string(filepath)?;
        let (rules, updates) = parse(&input)?;
        let res = updates
            .get_invalid(&rules)
            .0
            .iter()
            .map(|v| rules.fix(&v))
            .map(|v| v[v.len() / 2])
            .sum();
        Ok(res)
    }
    
    fn main() -> Result<()> {
        color_eyre::install()?;
    
        println!("Part 1: {}", part1("d05/input.txt")?);
        println!("Part 2: {}", part2("d05/input.txt")?);
        Ok(())
    }
    
      
  • Zig

     zig
        
    const std = @import("std");
    const List = std.ArrayList;
    const Map = std.AutoHashMap;
    
    const tokenizeScalar = std.mem.tokenizeScalar;
    const splitScalar = std.mem.splitScalar;
    const parseInt = std.fmt.parseInt;
    const print = std.debug.print;
    const contains = std.mem.containsAtLeast;
    const eql = std.mem.eql;
    
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const alloc = gpa.allocator();
    
    const Answer = struct {
        middle_sum: i32,
        reordered_sum: i32,
    };
    
    pub fn solve(input: []const u8) !Answer {
        var rows = splitScalar(u8, input, '\n');
    
        // key is a page number and value is a
        // list of pages to be printed before it
        var rules = Map(i32, List(i32)).init(alloc);
        var pages = List([]i32).init(alloc);
        defer {
            var iter = rules.iterator();
            while (iter.next()) |rule| {
                rule.value_ptr.deinit();
            }
            rules.deinit();
            pages.deinit();
        }
    
        var parse_rules = true;
        while (rows.next()) |row| {
            if (eql(u8, row, "")) {
                parse_rules = false;
                continue;
            }
    
            if (parse_rules) {
                var rule_pair = tokenizeScalar(u8, row, '|');
                const rule = try rules.getOrPut(try parseInt(i32, rule_pair.next().?, 10));
                if (!rule.found_existing) {
                    rule.value_ptr.* = List(i32).init(alloc);
                }
                try rule.value_ptr.*.append(try parseInt(i32, rule_pair.next().?, 10));
            } else {
                var page = List(i32).init(alloc);
                var page_list = tokenizeScalar(u8, row, ',');
                while (page_list.next()) |list| {
                    try page.append(try parseInt(i32, list, 10));
                }
                try pages.append(try page.toOwnedSlice());
            }
        }
    
        var middle_sum: i32 = 0;
        var reordered_sum: i32 = 0;
    
        var wrong_order = false;
        for (pages.items) |page| {
            var index: usize = page.len - 1;
            while (index > 0) : (index -= 1) {
                var page_rule = rules.get(page[index]) orelse continue;
    
                // check the rest of the pages
                var remaining: usize = 0;
                while (remaining < page[0..index].len) {
                    if (contains(i32, page_rule.items, 1, &[_]i32{page[remaining]})) {
                        // re-order the wrong page
                        const element = page[remaining];
                        page[remaining] = page[index];
                        page[index] = element;
                        wrong_order = true;
    
                        if (rules.get(element)) |next_rule| {
                            page_rule = next_rule;
                        }
    
                        continue;
                    }
                    remaining += 1;
                }
            }
            if (wrong_order) {
                reordered_sum += page[(page.len - 1) / 2];
                wrong_order = false;
            } else {
                // middle page number
                middle_sum += page[(page.len - 1) / 2];
            }
        }
        return Answer{ .middle_sum = middle_sum, .reordered_sum = reordered_sum };
    }
    
    pub fn main() !void {
        const answer = try solve(@embedFile("input.txt"));
        print("Part 1: {d}\n", .{answer.middle_sum});
        print("Part 2: {d}\n", .{answer.reordered_sum});
    }
    
    test "test input" {
        const answer = try solve(@embedFile("test.txt"));
        try std.testing.expectEqual(143, answer.middle_sum);
        try std.testing.expectEqual(123, answer.reordered_sum);
    }
    
    
      
  • Java

    Part 2 was an interesting one, and my solution kinda feels like cheating. All I did was change the validation method from part 1 to return the indexes of incorrectly placed pages, and then randomly swap those around in a loop until the validation passed. I was expecting this not to work at all, or to take forever to run, but surprisingly it only takes three to five seconds to complete.

     java
        
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Random;
    import java.util.Set;
    import java.util.stream.Collectors;
    
    public class Day05 {
        private static final Random random = new Random();
    
        public static void main(final String[] args) throws IOException {
            final String input = Files.readString(Path.of("2024\\05\\input.txt"), StandardCharsets.UTF_8);
            final String[] inputSplit = input.split("[\r\n]{4,}");
    
            final List<PageOrderingRule> rules = Arrays.stream(inputSplit[0].split("[\r\n]+"))
                .map(row -> row.split("\\|"))
                .map(row -> new PageOrderingRule(Integer.parseInt(row[0]), Integer.parseInt(row[1])))
                .toList();
    
            final List<ArrayList<Integer>> updates = Arrays.stream(inputSplit[1].split("[\r\n]+"))
                .map(row -> row.split(","))
                .map(row -> Arrays.stream(row).map(Integer::parseInt).collect(Collectors.toCollection(ArrayList::new)))
                .toList();
    
            System.out.println("Part 1: " + updates.stream()
                .filter(update -> validate(update, rules).isEmpty())
                .mapToInt(update -> update.get(update.size() / 2))
                .sum()
            );
    
            System.out.println("Part 2: " + updates.stream()
                .filter(update -> !validate(update, rules).isEmpty())
                .map(update -> fixOrder(update, rules))
                .mapToInt(update -> update.get(update.size() / 2))
                .sum()
            );
        }
    
        private static Set<Integer> validate(final List<Integer> update, final List<PageOrderingRule> rules) {
            final Set<Integer> invalidIndexes = new HashSet<>();
    
            for (int i = 0; i < update.size(); i++) {
                final Integer integer = update.get(i);
                for (final PageOrderingRule rule : rules) {
                    if (rule.x == integer && update.contains(rule.y) && i > update.indexOf(rule.y)) {
                        invalidIndexes.add(i);
                    }
                    else if (rule.y == integer && update.contains(rule.x) && i < update.indexOf(rule.x)) {
                        invalidIndexes.add(i);
                    }
                }
            }
    
            return invalidIndexes;
        }
    
        private static List<Integer> fixOrder(final List<Integer> update, final List<PageOrderingRule> rules) {
            List<Integer> invalidIndexesList = new ArrayList<>(validate(update, rules));
    
            // Swap randomly until the validation passes
            while (!invalidIndexesList.isEmpty()) {
                Collections.swap(update, random.nextInt(invalidIndexesList.size()), random.nextInt(invalidIndexesList.size()));
                invalidIndexesList = new ArrayList<>(validate(update, rules));
            }
    
            return update;
        }
    
        private static record PageOrderingRule(int x, int y) {}
    }
    
      
    • That's insane, you just brute-forced it. It definitely would not work for larger arrays.

  • Rust

    Kinda sorta got day 5 done on time.

     rust
        
    use std::cmp::Ordering;
    
    use crate::utils::{bytes_to_num, read_lines};
    
    pub fn solution1() {
        let mut lines = read_input();
        let rules = parse_rules(&mut lines);
    
        let middle_rules_sum = lines
            .filter_map(|line| {
                let line_nums = rule_line_to_list(&line);
                line_nums
                    .is_sorted_by(|&a, &b| is_sorted(&rules, (a, b)))
                    .then_some(line_nums[line_nums.len() / 2])
            })
            .sum::<usize>();
    
        println!("Sum of in-order middle rules = {middle_rules_sum}");
    }
    
    pub fn solution2() {
        let mut lines = read_input();
        let rules = parse_rules(&mut lines);
    
        let middle_rules_sum = lines
            .filter_map(|line| {
                let mut line_nums = rule_line_to_list(&line);
    
                (!line_nums.is_sorted_by(|&a, &b| is_sorted(&rules, (a, b)))).then(|| {
                    line_nums.sort_by(|&a, &b| {
                        is_sorted(&rules, (a, b))
                            .then_some(Ordering::Less)
                            .unwrap_or(Ordering::Greater)
                    });
    
                    line_nums[line_nums.len() / 2]
                })
            })
            .sum::<usize>();
    
        println!("Sum of middle rules = {middle_rules_sum}");
    }
    
    fn read_input() -> impl Iterator<Item = String> {
        read_lines("src/day5/input.txt")
    }
    
    fn parse_rules(lines: &mut impl Iterator<Item = String>) -> Vec<(usize, usize)> {
        lines
            .take_while(|line| !line.is_empty())
            .fold(Vec::new(), |mut rules, line| {
                let (a, b) = line.as_bytes().split_at(2);
                let a = bytes_to_num(a);
                let b = bytes_to_num(&b[1..]);
    
                rules.push((a, b));
    
                rules
            })
    }
    
    fn rule_line_to_list(line: &str) -> Vec<usize> {
        line.split(',')
            .map(|s| bytes_to_num(s.as_bytes()))
            .collect::<Vec<_>>()
    }
    
    fn is_sorted(rules: &[(usize, usize)], tuple: (usize, usize)) -> bool {
        rules.iter().any(|&r| r == tuple)
    }
    
    
      

    Reusing my bytes_to_num function from day 3 feels nice. Pretty fun challenge.

  • Julia

    Not really proud of today's solution. Probably because I started too late today.

    I used a dictionary of the numbers that should come before any given number, then checked whether any of them appear after that number instead. That was part 1 done. For part 2 I just hoped for the best: I kept switching each pair of problematic entries, and reordering that way worked (see the sketch below).
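
    A small Python sketch of that approach; the names and the bubble-style swapping are my own illustration, and it assumes (as in the puzzle input) that the rules cover every pair of pages that appear together in an update.

     python
        
    def parse_rules(rule_lines):
        """Map each page to the set of pages that must come before it."""
        before = {}
        for line in rule_lines:                  # e.g. "47|53"
            a, b = map(int, line.split("|"))
            before.setdefault(b, set()).add(a)
        return before

    def is_valid(update, before):
        # Invalid if any required predecessor of a page shows up after it.
        for i, p in enumerate(update):
            if before.get(p, set()) & set(update[i + 1:]):
                return False
        return True

    def fix(update, before):
        """Keep swapping offending neighbours until the order is valid."""
        pages = list(update)
        changed = True
        while changed:
            changed = False
            for i in range(len(pages) - 1):
                if pages[i + 1] in before.get(pages[i], set()):
                    pages[i], pages[i + 1] = pages[i + 1], pages[i]
                    changed = True
        return pages

    before = parse_rules(["75|47", "75|61", "75|53", "47|61", "47|53", "61|53"])
    print(is_valid([75, 47, 61, 53], before))    # True
    print(fix([61, 53, 75, 47], before))         # [75, 47, 61, 53]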
