@rntz.net
Michael Arntzenius irl. Postdoc at UC Berkeley doing PL + DB + incremental computation. PL design, math, calligraphy, idle musings, &c. rntz.net · @rntz@recurse.social · @arntzenius. Attempting to use bsky more now that people are showing up.
for a while there I thought silksong had integrity and then I found the goddamn double jump
* 1e3b8d6 work
* be68869 work
* f78da1b work
* 7f85b0b work
* f145b04 Update on Overleaf.
* 751966e work
* 7e34a6c work
* 9cbac47 work
* a037082 work
* ec99d61 work
* b292429 work
* 908f2fe work
* f02ff83 work
* 2b7dd67 work
* e6d36af work
* 570e91a work
* c458a82 work
* 78966a5 work
* dbc0cd6 work
* 09f6b18 work
* 43e4d6c work
* 0ec0ce8 work
* bdcdadb rm sections.tex
* 9f9e18e add sections.tex
* 47f8674 work
* aa3b55e work
* 7ec0caf work
behold the quality of my commit messages
who says logic programming has to give up functional dependencies, though? it's quite common to declare FDs in databases, eg.
(but yes, you do have to deal with it most of the time; that is the nature of the game.)
Functional programming couples functional dependency (for fixed x, y there's at most one z = x + y) with input-output directionality (supply x,y to get z). Logic/constraint programming decouples them: what are the x, y such that x + y = 5?
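A toy way to see the decoupling: treat `x + y = 5` as a relation and ask for every satisfying pair, rather than supplying inputs to compute one output. This is a brute-force sketch of mine over a small finite domain (`solutions` is an illustrative name); real logic/constraint engines answer this by unification or propagation, not enumeration.

```rust
// Enumerate all (x, y) with x + y == z over the domain 0..=z.
// Instead of feeding inputs forward to get one output, we ask
// the relation which tuples satisfy it.
fn solutions(z: i32) -> Vec<(i32, i32)> {
    let mut out = Vec::new();
    for x in 0..=z {
        for y in 0..=z {
            if x + y == z {
                out.push((x, y));
            }
        }
    }
    out
}
```

`solutions(5)` yields the six pairs (0,5), (1,4), …, (5,0) — the functional dependency still holds (each (x, y) determines one z), but no argument position is privileged as "the output".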
1000xresist is a helluva game
It is easy to point out all the evil and misfortune in the world. It is also easy to believe in a good so abstract that everything must first be razed to the ground to realize it. Neither is sufficient. You must find something real to cherish, to treasure, to earnestly love, and you must expose it.
yes, this is a fine strategy. I was implementing it but it was still slower than repeatedly merging the 2 smallest. the culprit is almost certainly overhead from some weird project-specific iterators involved. but see also bsky.app/profile/rntz...
there was good discussion of this on mastodon. it seems that b/c of the sizes of the vectors (no two vector lengths are within a factor of 2), repeatedly merging the 2 smallest does a basically optimal # of comparisons in the worst case. cf this thread recurse.social/@rntz/116025...
these are not insoluble but they're why I don't want to parallelize this step yet, especially as it's not the bottleneck in my current benchmarks/workloads. right now I'm more interested in why my weird iterator abstraction is performing so badly...
problem 2: i want my merge to deduplicate as well, which means the index arithmetic now has a sequential left-to-right dependency - if there's 1 duplicate in the left half, the right half must be shifted left by 1 - so I can't just splat the results directly into one big output vector.
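The dependency described above can be seen in a minimal deduplicating 2-way merge (my own illustrative sketch, not the project's code — names `push_dedup` and `merge_dedup` are mine): each write position is `out.len()`, which depends on how many duplicates were dropped earlier, so the right half's positions can't be computed independently.

```rust
// Push x unless it equals the last element already written; this is
// where the output index becomes data-dependent on earlier duplicates.
fn push_dedup(out: &mut Vec<u64>, x: u64) {
    if out.last() != Some(&x) {
        out.push(x);
    }
}

// Merge two sorted slices, dropping duplicates (within and across inputs).
fn merge_dedup(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = Vec::with_capacity(a.len() + b.len());
    let (mut i, mut j) = (0, 0);
    while i < a.len() && j < b.len() {
        if a[i] <= b[j] { push_dedup(&mut out, a[i]); i += 1; }
        else            { push_dedup(&mut out, b[j]); j += 1; }
    }
    for &x in &a[i..] { push_dedup(&mut out, x); }
    for &x in &b[j..] { push_dedup(&mut out, x); }
    out
}
```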
problem 1: partition-and-conquer is good in parallel but has poor constant factors when single-threaded, so now you have a "how much to parallelize" problem.
I'm not trying to do things in parallel right now, but yes, that's a very good approach if you are trying to parallelize (eg www.cs.cmu.edu/~guyb/paralg... sec 4.4 or dl.acm.org/doi/10.1145/... sec 5 p21).
not that I know of? in my particular example there are no duplicates across them (if an element is in one array it isn't in any other), but I don't think that's important and I don't want to assume it in general.
in fact, no two array lengths are within a factor of 2 of each other. which I think makes repeatedly-merge-smallest unusually efficient compared with a more even distribution - at least half the elements see only one merge!
I think the culprit here is overhead from a weird project-specific iterator abstraction my N-way merge is using. e.g. when I change the 2-way merge to also use these iterators it gets slower than the N-way merge.
but the arrays are large and of widely differing lengths, eg. from 60 elts to 90M elts
I want to merge N sorted arrays in rust (for small N, eg N=8, but not statically known) and I'm having trouble actually making this faster than the dumb obvious thing of just repeatedly merging two of them until done, from small to large.
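The "dumb obvious thing" from this post — repeatedly merging the two smallest runs until one remains — can be sketched like so. This is my own illustrative code (names `merge_two`, `merge_all` are mine), without the dedup or the project-specific iterators discussed upthread; since N is small, re-sorting by length each round is cheap relative to the merges themselves.

```rust
use std::cmp::Reverse;

// Two-way merge of sorted slices: the basic building block.
fn merge_two(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = Vec::with_capacity(a.len() + b.len());
    let (mut i, mut j) = (0, 0);
    while i < a.len() && j < b.len() {
        if a[i] <= b[j] { out.push(a[i]); i += 1; }
        else            { out.push(b[j]); j += 1; }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out
}

// Repeatedly merge the two shortest runs (Huffman-style) until one remains.
fn merge_all(mut runs: Vec<Vec<u64>>) -> Vec<u64> {
    if runs.is_empty() {
        return Vec::new();
    }
    while runs.len() > 1 {
        // Sorting descending by length puts the two shortest at the end.
        runs.sort_by_key(|r| Reverse(r.len()));
        let a = runs.pop().unwrap();
        let b = runs.pop().unwrap();
        runs.push(merge_two(&a, &b));
    }
    runs.pop().unwrap()
}
```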
dreaming of a moderately fast query language
layer 0
layer 1
layer 2
the atreus keyboard layout I've landed on as a {emacs,mac,latex,lisp} user
I agree cars kill more on average, but your claim seems false? In Spain in 2020 only 1,370 people died in reported traffic accidents, ≈ 3.75 persons/day. I guess it hinges on the area you average over: Spain, Europe, the world...
road-safety.transport.ec.europa.eu/document/dow...
a screenshot of "past puzzles" page from enclose.horse, showing my performance. the exact scores are censored out but days 13-21 are all diamond (perfect); before that is a mix of gold and occasional diamond.
are enclose.horse puzzles getting easier or am I just gitting gud?
the _extensions_ are good.
the _intensions_ are fucked!
It's well known that set intersection is associative. Unfortunately:
{-# LANGUAGE NumericUnderscores #-}
import Data.Set

n = 20_000_000 :: Int
evens = fromList [0,2..n]
odds  = fromList [1,3..n]
ends  = fromList [0,n]

-- fast intersects with the tiny set first; slow first builds the
-- (empty!) intersection of two 10-million-element sets.
fast = evens `intersection` (odds `intersection` ends)
slow = (evens `intersection` odds) `intersection` ends
main = print (size fast, size slow)
repost (boosting reach)
repost (moral agreement)
repost (aspirational)
repost (compliment)
repost (mockery)
repost (cat pic)
repost (i got rickrolled)
modal dialogs delendi sunt
there is a direct correlation between how anxious I get about <waves hands> and how many james hoffmann videos I watch
Sophic is to Sophomoric
as
Sapphic is to Sapphomoric,
obviously
More evidence supporting this really neat paper doi.org/10.1111/ajps... (JSTOR version: www.jstor.org/stable/45295...)
if this were false, would we know? if oral transmission is by definition un-*document*-ed, you'd need an oral culture in contact with a literate culture who cared to write down an oral recitation verbatim (not trivial to do! you can't write at speech speed, so errors are likely) multiple times.