
~swapgs

@swapgs.infosec.exchange.ap.brid.gy

zigzagging my way through cursed code and bugs [bridged from https://infosec.exchange/@swapgs on the fediverse by https://fed.brid.gy/ ]

37
Followers
1
Following
67
Posts
26.11.2024
Joined

Latest posts by ~swapgs @swapgs.infosec.exchange.ap.brid.gy

vulnerability age in curl


CVE-2026-3784 set a new #curl record. This flaw existed in curl source code for 24.97 years before it was discovered.

Illustrated in the slightly hard-to-read graph below. The average age of a curl vulnerability when reported is eight years.

https://curl.se/docs/CVE-2026-3784.html

12.03.2026 08:08 👍 6 🔁 9 💬 2 📌 0

RIP FX - You are a legend

02.03.2026 05:03 👍 58 🔁 25 💬 6 📌 2

RE: https://infosec.exchange/@thezdi/116132615250461193

> The affected repo was removed

25.02.2026 18:22 👍 0 🔁 0 💬 0 📌 0
**What Package Registries Could Borrow from OCI**

Every package manager ships code as an archive, and every one of them has a slightly different way to do it. npm wraps tarballs in a `package/` directory prefix. RubyGems nests gzipped files inside an uncompressed tar. Alpine concatenates three gzip streams and calls it a package. Python cycled through four distribution formats in twenty years. RPM used cpio as its payload format for nearly three decades before finally dropping it in 2025.

Meanwhile, the container world converged on a single format: OCI, the Open Container Initiative spec. And over the past few years, OCI registries have quietly started storing things that aren’t containers at all: Helm charts, Homebrew bottles, WebAssembly modules, AI models. The format was designed for container images, but the underlying primitives turn out to be general enough that it’s worth asking whether every package manager could use OCI for distribution.

### What OCI actually is

OCI defines three specifications: a Runtime Spec (how to run containers), an Image Spec (how to describe container contents), and a Distribution Spec (how to push and pull from registries). At the storage level, an OCI registry deals in two primitives: **manifests** and **blobs**. A manifest is a JSON document that references one or more blobs by their SHA-256 digest. A blob is an opaque chunk of binary content, and tags are human-readable names that point to manifests.

A container image manifest looks like this:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:abc123...",
    "size": 1234
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:def456...",
      "size": 56789
    }
  ]
}
```

The config blob holds metadata (what OS, what architecture, what environment variables). Each layer blob holds a tarball of filesystem changes.
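A minimal sketch of the two primitives in code, with hypothetical helper names and placeholder bytes standing in for real config and layer blobs, showing how descriptors and a single-layer manifest are assembled:

```python
import hashlib
import json

def describe_blob(data: bytes, media_type: str) -> dict:
    """Build an OCI descriptor: media type, sha256 digest, and size."""
    return {
        "mediaType": media_type,
        "digest": "sha256:" + hashlib.sha256(data).hexdigest(),
        "size": len(data),
    }

def build_manifest(config: bytes, layer: bytes) -> dict:
    """Assemble a minimal single-layer OCI image manifest."""
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "config": describe_blob(config, "application/vnd.oci.image.config.v1+json"),
        "layers": [describe_blob(layer, "application/vnd.oci.image.layer.v1.tar+gzip")],
    }

# Placeholder bytes; a real client would hash the actual blobs it uploads.
manifest = build_manifest(b'{"os":"linux"}', b"fake-layer-bytes")
print(json.dumps(manifest, indent=2))
```

Everything the registry needs to verify the artifact is derived from the blob bytes themselves.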
The registry doesn’t care what’s inside the blobs, only that each one is identified and verified by its digest.

The v1.1 update in February 2024 added `artifactType`, which declares what kind of thing a manifest describes so a registry can distinguish a Helm chart from a container image from a Homebrew bottle, and `subject`, which lets one artifact reference another and is how signatures and SBOMs get attached to the thing they describe. Before 1.1, people stored non-container artifacts by setting custom media types on the config blob, which worked, but registries sometimes rejected or mishandled the results.

To push an artifact, you upload each blob (to `/v2/<name>/blobs/uploads/`), then push a manifest that references those blobs by digest and size. To pull, you fetch the manifest, read the digests, and download the blobs. Because everything is addressed by digest, the registry only stores one copy of any given blob even if multiple artifacts reference it.

### Why OCI and not something purpose-built

The format itself carries a lot of container-specific ceremony, but every major cloud provider already runs an OCI-compliant registry: GitHub Container Registry, Amazon ECR, Azure Container Registry, Google Artifact Registry. Self-hosted options like Harbor and Zot are mature. Authentication, access control, replication, and CDN-backed blob storage all exist because container registries already solved those problems at scale, and a package registry built on OCI inherits all of it without reimplementing any of it.

ORAS (OCI Registry As Storage) is a CNCF project that abstracts the multi-step OCI upload process into simple commands:

```shell
oras push registry.example.com/mypackage:1.0.0 \
  package.tar.gz:application/vnd.example.package.v1.tar+gzip
```

This uploads the file as a blob, creates a manifest referencing it, and tags it. Helm, Flux, Crossplane, and the Sigstore signing tools all use ORAS or the underlying OCI client libraries.
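On the pull side, digest addressing means the client can check every downloaded blob before trusting it. A sketch, with the manifest descriptor and blob constructed locally for illustration:

```python
import hashlib

def verify_blob(blob: bytes, descriptor: dict) -> None:
    """Check a downloaded blob against its manifest descriptor."""
    algo, expected = descriptor["digest"].split(":", 1)
    if algo != "sha256":
        raise ValueError(f"unsupported digest algorithm: {algo}")
    actual = hashlib.sha256(blob).hexdigest()
    if actual != expected:
        raise ValueError(f"digest mismatch: got sha256:{actual}")
    if len(blob) != descriptor["size"]:
        raise ValueError("size mismatch")

blob = b"pretend this is a gzipped tarball"
desc = {"digest": "sha256:" + hashlib.sha256(blob).hexdigest(), "size": len(blob)}

verify_blob(blob, desc)          # passes silently
try:
    verify_blob(b"tampered", desc)
except ValueError as e:
    print("rejected:", e)
```

The check is the same regardless of which server the bytes came from, which is what makes mirroring by digest work later in the article.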
### What package managers ship today

No individual choice here is wrong, but seventeen different answers to the same basic problem suggests the archive format was never the part anyone thought hard about.

Ecosystem | Format | What’s inside
---|---|---
npm | `.tgz` (gzip tar) | Files under a `package/` prefix
PyPI | `.whl` (zip) or `.tar.gz` | Wheel: pre-built files + `.dist-info`. Sdist: source + `PKG-INFO`
RubyGems | `.gem` (tar of gzips) | `metadata.gz` + `data.tar.gz` + `checksums.yaml.gz`
Maven | `.jar` (zip) | Compiled `.class` files + `META-INF/MANIFEST.MF`
Cargo | `.crate` (gzip tar) | Source + `Cargo.toml` + `Cargo.lock`
NuGet | `.nupkg` (zip) | DLL assemblies + `.nuspec` XML metadata
Homebrew | `.bottle.tar.gz` | Compiled binaries under install prefix
Go | `.zip` | Source under `module@version/` path prefix
Hex | Outer tar of inner files | `VERSION` + `metadata.config` + `contents.tar.gz` + `CHECKSUM`
Debian | `.deb` (ar archive) | `debian-binary` + `control.tar.*` + `data.tar.*`
RPM | Custom binary format | Header sections + cpio payload (v4) or custom format (v6)¹
Alpine | Concatenated gzip streams | Signature + control tar + data tar
Conda | `.conda` (zip of zstd tars) or `.tar.bz2` | `info/` metadata + package content
Dart/pub | `.tar.gz` | Source + `pubspec.yaml`
Swift PM | `.zip` | Source archive
CPAN | `.tar.gz` | `.pm` files + `Makefile.PL` + `META.yml` + `MANIFEST`
CocoaPods | No archive format | `.podspec` points to source URLs

### The weird ones

**RubyGems** nests compression inside archiving instead of the other way around. A `.gem` is an uncompressed tar containing individually gzipped files. So the outer archive provides no compression, and each component is compressed separately. This means you can extract the metadata without decompressing the data, which is a reasonable optimization, but the format looks strange at first glance because everything else in the Unix world puts gzip on the outside.
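The metadata-without-decompressing-data property is easy to see in code. A sketch that builds a toy `.gem`-shaped archive in memory (an uncompressed tar of gzipped members) and reads only its `metadata.gz`:

```python
import gzip
import io
import tarfile

def read_gem_metadata(gem_bytes: bytes) -> bytes:
    """Extract and decompress only metadata.gz from a .gem-style
    archive: an uncompressed tar whose members are gzipped."""
    with tarfile.open(fileobj=io.BytesIO(gem_bytes)) as tar:
        member = tar.extractfile("metadata.gz")
        return gzip.decompress(member.read())

# Build a toy .gem in memory; real gems also carry checksums.yaml.gz.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    members = [
        ("metadata.gz", gzip.compress(b"name: demo\nversion: 1.0.0\n")),
        ("data.tar.gz", gzip.compress(b"(package contents)")),
    ]
    for name, payload in members:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

print(read_gem_metadata(buf.getvalue()).decode())
```

The `data.tar.gz` member is never decompressed, which is exactly the optimization the nesting buys.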
**Alpine APK** abuses a quirk of the gzip specification. The gzip format allows concatenation of multiple streams into a single file, and technically any compliant decompressor should handle it. Alpine packages are three separate gzip streams (signature, control, data) concatenated into one file. Since gzip provides no metadata about where one stream ends and the next begins, you have to fully decompress each segment to find the boundary. Kernel modules inside APK packages are often already gzipped, so you get gzip-inside-tar-inside-gzip.

**RPM** used cpio as its payload format from 1995 until RPM v6 shipped in September 2025. The cpio format has a 4GB file size limit baked into its header fields. For 30 years, no RPM package could contain a file larger than 4GB. RPM v6 finally dropped cpio in favor of a custom format.

**Debian** deliberately chose the `ar` archive format from the 1970s. The reasoning was practical: the extraction tools (`ar`, `tar`, `gzip`) are available on virtually every Unix system, even in minimal rescue environments. You can unpack a `.deb` with nothing but POSIX utilities. Probably the most intentional format choice on this list.

**npm’s `package/` prefix** means every tarball wraps its contents in a `package/` directory that gets stripped during install. This causes issues with relative `file:` dependencies inside tarballs, where npm tries to resolve paths relative to the tarball rather than the unpacked directory.

**Python** cycled through four distribution formats: source tarballs with `setup.py` (1990s), eggs (2004, inspired by Java JARs, could be imported while still zipped), sdists (standardized tar.gz), and finally wheels (2012). Eggs lived for nineteen years before PyPI stopped accepting them in August 2023. The wheel format encodes Python version, ABI tag, and platform tag in the filename, which is more metadata than most ecosystems put in the filename but less than what goes in the manifest.
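The multi-stream gzip behavior APK relies on is standard, and Python's `gzip` module is one of the compliant decompressors that treats concatenated members as a single logical stream:

```python
import gzip

# Three independently compressed segments, like an APK's
# signature / control / data sections, concatenated into one file.
segments = [b"signature", b"control", b"data"]
apk_like = b"".join(gzip.compress(s) for s in segments)

# A compliant decompressor reads straight through the member
# boundaries; nothing in the output marks where one stream ended.
print(gzip.decompress(apk_like))  # b'signaturecontroldata'
```

Note that recovering the three original segments requires decompressing from the start to find each boundary, which is the cost the article describes.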
**Conda** maintained two incompatible formats for years: the legacy `.tar.bz2` and the modern `.conda` (a zip containing zstandard-compressed tars). The switch from bzip2 to zstandard yielded significant decompression speedups, but every tool in the ecosystem had to support both formats indefinitely.

**Hex** (Erlang/Elixir) has two checksum schemes in the same package. The deprecated “inner checksum” hashes concatenated file contents. The current “outer checksum” hashes the entire tarball. Both are present for backward compatibility.

### Who’s already using OCI

Homebrew is a traditional package manager, not a “cloud-native” tool, and its migration to OCI already happened under pressure. In February 2021, JFrog announced that Bintray would shut down on May 1. Homebrew’s bottles were hosted on Bintray. The maintainers had about three months to move their entire archive of precompiled binaries somewhere else, and they landed on GitHub Packages, which stores everything as OCI blobs on `ghcr.io`. Homebrew 3.1.0 shipped April 12, 2021, with GHCR as the default download location.

The transition was rough in the ways you’d expect. CI pipelines across the industry broke because macOS images on services like CircleCI shipped with old Homebrew versions that still pointed at Bintray. During a brownout on April 26, any system running an older Homebrew got 502 errors. Older bottle versions were never migrated, so anyone pinned to an old formula version got 404s and had to build from source. The fix was `brew update`, but CI environments cached old Homebrew versions and didn’t auto-update.

After the dust settled, the OCI-based storage enabled things that wouldn’t have been practical on Bintray. Homebrew 4.0.0 (February 2023) switched from git-cloned tap metadata to a JSON API that leverages the structured OCI manifests, and `brew update` dropped from running every 5 minutes to every 24 hours.
Manifest-based integrity checking replaced the old checksum approach, though this introduced its own class of bugs where manifest checksums wouldn’t match. Platform multiplexing came naturally from OCI image indexes, which map platform variants (`arm64_sonoma`, `x86_64_linux`) to individual manifests without Homebrew having to build that logic itself.

When you run `brew install`, the client fetches the OCI image index manifest from `ghcr.io/v2/homebrew/core/<formula>/manifests/<version>`, selects the right platform manifest, then HEADs the blob URL to get a 307 redirect to a signed URL on `pkg-containers.githubusercontent.com`, where Fastly’s CDN serves the actual bytes. GHCR requires a bearer token even for public images, so Homebrew hardcodes `QQ==` as the bearer token. The bottle inside the blob is still a gzipped tarball with the same internal structure it always had.

Helm charts followed a similar path. Helm v3.8 added native OCI registry support, and the old `index.yaml` repository format is being phased out. Azure CLI retired legacy Helm repository support in September 2025. Charts push with `helm push` using `oci://`-prefixed references, and the chart tarball goes into a layer blob.

### What would change

**Platform variants get first-class support.** OCI image indexes map platform descriptors to manifests. A package with builds for five platforms would have an index pointing to five manifests, each pointing to the right blob. This is cleaner than npm’s convention of publishing platform-specific binaries as separate `optionalDependencies` packages, or Python’s approach of uploading multiple wheels with platform-encoded filenames and letting pip pick the right one.

**Signing and attestation come built in.** Every ecosystem is building its own signing infrastructure independently.
npm added Sigstore-based provenance in 2023, PyPI added attestations in 2024, Cargo has RFC 3403 open, and RubyGems has had signature support for years that almost nobody uses, because the tooling never reached the point where it was easy enough to be default behavior. Each effort required dedicated engineering time from small registry teams who were already stretched thin.

OCI’s `subject` field and referrers API provide a single mechanism for all of this. Cosign and Notation can sign any OCI artifact, storing the signature as a separate artifact in the same registry that references the signed content via `subject`. SBOMs attach the same way, as do build provenance attestations, vulnerability scan results, and license audits: push an artifact with `subject` pointing to the thing it describes, and any client can discover it through the referrers API. The security ecosystem around OCI registries (cosign, notation, Kyverno, OPA Gatekeeper, Ratify) represents years of investment that package registries could inherit. A policy engine enforcing “all artifacts must be signed before deployment” wouldn’t care whether it’s looking at a container image or a RubyGem, because the referrers API works the same way for both.

**Deduplication and registry sustainability.** Content-addressable storage identifies every blob by its SHA-256 digest, so if two packages contain an identical file the registry stores it once, and if two concurrent uploads push the same blob the registry accepts both but keeps one copy. Shared content between unrelated source packages is rare, so this matters more for binary packages, where the same shared libraries get bundled into Homebrew bottles for different formulas, the same runtime components appear in multiple Conda packages, and Debian’s archive carries the same `.so` files across dozens of packages and versions.

The community-funded registries are where this adds up.
rubygems.org, crates.io, PyPI, and hex.pm run on bandwidth donated by CDN providers, primarily Fastly. These registries serve terabytes of package data to millions of developers on infrastructure that someone is volunteering to cover. Content-addressable storage won’t eliminate those costs, but a registry that’s been running for ten years has accumulated a lot of identical blobs that a content-addressable backend would collapse into single copies, and the savings compound as the registry grows.

**Content-addressed mirroring.** Mirroring a package registry today requires reimplementing each registry’s API and storage format, and every ecosystem’s mirror implementation is different: the Simple Repository API for PyPI, the registry API for npm, the compact index for RubyGems. Anyone can stand up an OCI-compliant mirror with off-the-shelf software like Harbor, Zot, or the CNCF Distribution project, which is a much lower bar than reverse-engineering a bespoke registry protocol.

Content-addressable storage changes the trust model. If you have a blob’s SHA-256 digest, you can verify its integrity regardless of which server you downloaded it from, because two registries serving the same digest are provably serving the same bytes. This is the same property that makes Docker images work as lockfiles for system packages: once you have the digest, the content is immutable and verifiable no matter where it came from. A mirror doesn’t need to be trusted to be honest, only to be available. The manifest contains the digests, and the blobs can come from anywhere: geographic mirrors, corporate caches, peer-to-peer distribution, even a USB drive with an OCI layout directory. When Fastly has an outage and rubygems.org goes down with it, any alternative source that can serve matching bytes becomes a valid mirror, without any special trust relationship.
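A toy model of the content-addressable property, with a hypothetical in-memory store where duplicate uploads collapse into one copy and any retrieved blob can be re-verified against its digest:

```python
import hashlib

class BlobStore:
    """Toy content-addressable store: one copy per distinct blob."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(digest, data)  # second upload is a no-op
        return digest

    def get(self, digest: str) -> bytes:
        data = self._blobs[digest]
        # Any mirror serving this digest can be checked the same way.
        assert "sha256:" + hashlib.sha256(data).hexdigest() == digest
        return data

store = BlobStore()
d1 = store.put(b"libfoo.so contents")
d2 = store.put(b"libfoo.so contents")  # same bytes from another package
print(d1 == d2, len(store._blobs))     # True 1
```

Real registries do the same thing with object storage instead of a dict, but the deduplication and verification logic is no deeper than this.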
**Registry infrastructure is already built.** Running rubygems.org or crates.io means running custom storage, custom CDN configuration, and custom authentication. A package registry built on OCI offloads the most expensive parts to infrastructure that already exists, with SLAs and dedicated engineering teams, and the registry team can spend more time on what actually matters: governance, the package index, dependency resolution, and search.

### What wouldn’t work well

**The two-step fetch.** If a package manager client talks directly to the OCI registry, it needs to fetch the manifest, parse it, then download the blob before extraction can start. The container world doesn’t care about this because you’re pulling maybe 5-10 layers for a single image. Package installs fan out across the dependency graph: a fresh `npm install` on a mid-sized project might resolve 800 transitive dependencies, each needing its own manifest fetch before the content download can begin. A client could pipeline aggressively and fetch manifests concurrently, but the OCI Distribution Spec doesn’t have a batch manifest endpoint, so 800 packages still means 800 separate HTTP requests that don’t exist in the current model, where npm can GET a tarball directly by URL.

There’s a way around this: if registries included OCI blob digests in their existing metadata responses instead of (or alongside) direct tarball URLs, clients could skip the manifest fetch entirely and download blobs by digest. The difference in request flow: a pure OCI pull requires three hops (fetch the manifest, request the blob, which returns a 307 redirect, then download from the signed CDN URL), while a smarter integration where the registry resolves the manifest internally reduces that to two (the registry’s metadata API returns the digest and a direct CDN URL, and the client downloads the blob and verifies it against the digest).

Homebrew doesn’t quite do this yet.
The `brew install` flow described earlier requires two extra round-trips on top of the content transfer: one for the manifest, one for the redirect. The 307 redirect isn’t purely a latency cost; it’s also how the registry verifies the bearer token before handing off to the CDN, so registries adopting this pattern would need to decide whether their blobs are truly public or whether they want to keep that gatekeeper step. For registries with private package tiers, like npm’s paid plans or NuGet’s Azure Artifacts integration, the redirect model matters because access control at the blob level is part of the product.

The formula metadata already knows the GHCR repository and tag, so the index service is already doing part of the resolution. If the formula JSON included the blob digest and a direct CDN URL, both hops disappear and the client downloads the blob in a single request while still verifying integrity by digest. Package managers that separate download from install could take it further by batching blob fetches during a dedicated download phase.

**Metadata is the actual hard problem.** OCI manifests have annotations (arbitrary key-value strings) and a config blob, but package metadata like dependency trees, version constraints, platform compatibility rules, and license information doesn’t fit naturally into either. Each ecosystem would end up defining its own conventions for encoding metadata: its own `mediaType` for its config blob, its own annotation keys. The reason every package manager invented its own archive format is not that tar and zip are insufficient for archiving files, but that the metadata conventions are what make each ecosystem different. What makes a `.gem` different from a `.crate` is how dependencies are expressed and what platform compatibility means, not the compression algorithm wrapping the source code. OCI standardizes how bytes move between machines, not what those bytes mean to a package manager.
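To make that concrete, here is what one such per-ecosystem convention might look like. The media types, artifact type, and annotation keys below are entirely invented for illustration, not any existing standard: ecosystem metadata goes in the config blob, with a couple of indexable fields duplicated as annotations.

```python
import hashlib
import json

# Hypothetical convention: dependency metadata lives in the config blob.
config_doc = {
    "name": "demo-pkg",
    "version": "1.0.0",
    "dependencies": {"other-pkg": ">=2.0, <3.0"},
    "license": "MIT",
}
config_bytes = json.dumps(config_doc, sort_keys=True).encode()
tarball = b"placeholder package tarball bytes"

manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "artifactType": "application/vnd.example.pkg.v1",  # invented
    "config": {
        "mediaType": "application/vnd.example.pkg.config.v1+json",  # invented
        "digest": "sha256:" + hashlib.sha256(config_bytes).hexdigest(),
        "size": len(config_bytes),
    },
    "layers": [{
        "mediaType": "application/vnd.example.pkg.content.v1.tar+gzip",  # invented
        "digest": "sha256:" + hashlib.sha256(tarball).hexdigest(),
        "size": len(tarball),
    }],
    "annotations": {
        # Duplicated so a registry that indexes annotations can search them.
        "com.example.pkg.name": config_doc["name"],
        "com.example.pkg.version": config_doc["version"],
    },
}
print(manifest["annotations"]["com.example.pkg.version"])
```

Every ecosystem would need to invent and maintain its own version of exactly this shape, which is the article's point: OCI gives you the container, not the semantics.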
**Small package overhead.** The OCI ceremony of manifests, layers, media types, and digest computation makes sense for multi-layer container images that can be gigabytes. For a 50KB npm package, the manifest JSON, config blob, digest computation for each, and the multi-step chunked upload API add up to several HTTP round-trips and a few hundred bytes of protocol overhead where the current model needs a single PUT. The fixed cost doesn’t scale down with the artifact, and a large share of packages on registries like npm and PyPI are small enough that the protocol overhead becomes a meaningful fraction of the payload.

**Registry UI confusion.** When a registry contains both container images and packages, the user experience gets muddled. GitHub Container Registry shows `docker pull` commands for everything, but a Homebrew bottle needs `brew install` and a Helm chart needs `helm pull`. The UX for this is generally not great.

**Not all registries are equal.** The OCI 1.1 features that make non-container artifacts work well (custom `artifactType`, the referrers API, the `subject` field) aren’t universally supported. The OCI Image Specification advises that artifacts concerned with portability should follow specific conventions for `config.mediaType`, and not all registries handle custom media types consistently. Registry implementations lag the spec, and the gap between what the spec allows and what any given registry supports is a source of bugs.

**Offline and air-gapped use.** A `.deb` or `.rpm` file is self-contained. You can copy it to a USB drive and install it on an air-gapped machine. An OCI artifact requires a manifest and one or more blobs, stored by digest in a registry’s content-addressable layout. Exporting to a self-contained format (OCI layout on disk) is possible but adds a step that simpler archive formats don’t need.
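For the air-gapped case, the on-disk escape hatch looks roughly like this: a sketch of writing a minimal OCI image layout (the `oci-layout` marker file, a content-addressed `blobs/` tree, and an `index.json`), with placeholder manifest and blob bytes:

```python
import hashlib
import json
import os
import tempfile

def write_oci_layout(root: str, manifest_bytes: bytes, blobs: list) -> None:
    """Write a minimal OCI image layout directory."""
    def store(data: bytes) -> str:
        # Blobs live at blobs/sha256/<hex digest>.
        digest = hashlib.sha256(data).hexdigest()
        path = os.path.join(root, "blobs", "sha256", digest)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
        return "sha256:" + digest

    for blob in blobs:
        store(blob)
    manifest_digest = store(manifest_bytes)

    # Marker file identifying the directory as an OCI layout.
    with open(os.path.join(root, "oci-layout"), "w") as f:
        json.dump({"imageLayoutVersion": "1.0.0"}, f)

    # Entry point: an index pointing at the manifest by digest.
    with open(os.path.join(root, "index.json"), "w") as f:
        json.dump({"schemaVersion": 2, "manifests": [{
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": manifest_digest,
            "size": len(manifest_bytes),
        }]}, f)

root = tempfile.mkdtemp()
write_oci_layout(root, b'{"schemaVersion":2}', [b"blob-bytes"])
print(sorted(os.listdir(root)))  # ['blobs', 'index.json', 'oci-layout']
```

This is the same structure tools like `oras` and `skopeo` produce for offline transfer; the extra step is that someone has to run the export, where copying a `.deb` needs no tooling at all.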
**Who pays.** GHCR storage and bandwidth are currently free for public images, with a promise of at least one month’s notice before that changes. At standard GitHub Packages rates ($0.25/GB/month for storage, $0.50/GB for bandwidth), Homebrew’s bottle archive would cost substantially more than zero. GitHub absorbs that as an in-kind subsidy, and the Homebrew 3.1.0 release notes explicitly thank them for it. If rubygems.org or PyPI moved all their package storage to GHCR tomorrow, someone would need to have a similar conversation with GitHub, or AWS, or Google.

The current model of Fastly donating CDN bandwidth is fragile, but it exists and it’s understood. Adopting OCI for distribution is partly a technical decision about storage and protocols, but it’s also a decision about who funds the infrastructure that the ecosystem depends on and what leverage that creates. Shifting from Fastly-donated CDN to GitHub-donated OCI storage changes the answer to that question without necessarily improving it.

### The smarter integration

Package registries do more than serve archives. They maintain an index of all packages, versions, and metadata that clients can search and resolve dependencies against, whether that’s npm’s registry API, PyPI’s Simple Repository API, crates.io’s git-based index, RubyGems’ compact index, or Go’s module proxy protocol. OCI registries have no equivalent: you can list tags for a repository, but there’s no API for “give me all packages matching this query” or “resolve this dependency tree.”

Splitting the roles makes more sense than having clients talk to the OCI registry directly: the registry uses OCI as a blob storage backend and integrates the content-addressable properties into the metadata APIs it already operates. Every package manager client already makes a metadata request before downloading anything. npm fetches the packument, pip fetches the Simple Repository API, Bundler fetches the compact index, `go` hits the module proxy.
These responses already include download URLs for specific versions. If those responses included OCI blob digests and direct download URLs pointing at OCI-backed storage, clients would get the content-addressable integrity checks, the mirroring properties, and the deduplication without ever needing to speak the OCI Distribution protocol themselves. The registry’s index service resolves the OCI manifest internally and hands the client a digest and a URL.

The registry keeps full control of discovery, dependency resolution, version selection, and platform matching: all the ecosystem-specific logic that OCI doesn’t and shouldn’t try to handle. The OCI layer underneath provides content-addressable blob storage, signing via the referrers API, and the ability for mirrors to serve blobs by digest without special trust. Clients don’t need to know they’re talking to OCI-backed storage any more than they need to know whether the registry uses S3 or GCS underneath today. Homebrew already works roughly this way: the formula metadata points clients at GHCR, and the OCI manifest and redirect are implementation details of the download path.

A registry doesn’t even need to migrate its existing packages to get some of these benefits. OCI 1.1’s `artifactType` allows minimal manifests that exist purely as anchors for the referrers API. A registry could push a small OCI manifest for each package version, with the package’s digest in the annotations, and use it as the `subject` that signatures and SBOMs attach to. The actual tarball continues to be served from the existing CDN. The signing and attestation infrastructure works without moving a single byte of package data.

The OCI metadata model could also inform how registries design their own APIs. The Distribution Spec separates “list of versions” (the paginated tags endpoint, `?n=<limit>&last=<tag>`) from “metadata for a specific version” (the manifest for that tag).
npm’s packument does neither: it returns a single JSON document containing metadata for every version of a package, with no pagination. For a package with thousands of versions that response can be megabytes. When npm 10.4.0 stopped using the abbreviated metadata format, installing npm itself went from downloading 2.1MB of metadata to 21MB. The full packuments also caused out-of-memory crashes when the CLI cached them in an unbounded map during dependency resolution. Most registries were designed when packages had dozens of versions, not thousands, and pagination wasn’t an obvious concern.

PyPI’s Simple Repository API lists all files for a package in one response, though PEP 700 added version-listing metadata after the fact. crates.io takes a different approach with a git-based index that stores one file per crate, all versions as line-delimited JSON, while RubyGems’ compact index and Go’s module proxy both return complete version lists in a single response. None of these were designed for pagination early on because the scale wasn’t there yet, and retrofitting pagination onto an existing API is harder than building it in from the start. If a registry is already rethinking its metadata endpoints to integrate OCI blob digests, that’s a natural time to adopt the structural pattern of paginated version listing plus per-version metadata fetched on demand.

### Would it actually help

Homebrew’s migration happened under duress when Bintray died, and the rough edges were real: broken CI, missing old versions, a new class of checksum bugs. None of it required changing the archive format: the bottles are the same gzipped tarballs they always were, just stored and addressed differently. Most of the drawbacks (the manifest fan-out, the redirect tax, the metadata gap) come from treating OCI as the client-facing protocol rather than as infrastructure behind the registry’s existing API.
The technical path through that is less disruptive than adopting a new distribution protocol from scratch.

The registries that would benefit most from OCI’s storage and signing primitives are the community-funded ones: rubygems.org, crates.io, PyPI, hex.pm. They’re also the ones least able to afford the migration or negotiate the hosting arrangements that make it sustainable. This question is becoming less hypothetical as funding conversations around open source registries increasingly reference OCI adoption, and the registries on the receiving end of those conversations should understand what they’d be gaining and what they’d be giving up.

Converging on shared storage primitives is the easy part of the problem. Each ecosystem’s metadata semantics are genuinely different and will stay that way. The harder question is whether the funding arrangements that come with OCI adoption serve the registries or the infrastructure providers offering to host them.

---

¹ v5 was a fork by Jeff Johnson, RPM’s long-time maintainer, after he split from Red Hat around 2007. No major distribution adopted it. The mainline project skipped to v6 to avoid confusion.

What Package Registries Could Borrow from OCI: https://nesbitt.io/2026/02/18/what-package-registries-could-borrow-from-oci.html

18.02.2026 12:59 👍 0 🔁 3 💬 0 📌 0
Debugger Engineer - Jobs at Apple (UK) Apply for a Debugger Engineer job at Apple. Read about the role and find out if it’s right for you.

My team at Apple is hiring a Debugger Engineer to work on LLDB in London: https://jobs.apple.com/en-gb/details/200643284/debugger-engineer
If you’re interested, please submit your CV through the website. Feel free to get in touch with me if you have any questions!

10.02.2026 19:14 👍 2 🔁 8 💬 1 📌 1
Heat map


I can't remember if I cried
When my `-f root` hit an ACL line
But something touched me deep inside…

The day the telnet died

On January 14, 2026, global telnet traffic observed by the GreyNoise Global Observation Grid fell off a cliff. A 59% sustained […]

[Original post on mastodon.social]

10.02.2026 20:38 👍 1 🔁 7 💬 1 📌 1

Hot take: If we added a "--install" option to #curl, we could optimize many a "| sh -" pipeline away.

Finally a truly universal installer.

05.02.2026 11:31 👍 4 🔁 24 💬 9 📌 1
**Incident Report: CVE-2024-YIKES**

**Report filed:** 03:47 UTC
**Status:** Resolved (accidentally)
**Severity:** Critical → Catastrophic → Somehow Fine
**Duration:** 73 hours
**Affected systems:** Yes

**Executive Summary:** A security incident occurred. It has been resolved. We take security seriously. Please see previous 14 incident reports for details on how seriously.

### Summary

A compromised dependency in the JavaScript ecosystem led to credential theft, which enabled a supply chain attack on a Rust compression library, which was vendored into a Python build tool, which shipped malware to approximately 4 million developers before being inadvertently patched by an unrelated cryptocurrency mining worm.

### Timeline

**Day 1, 03:14 UTC** — Marcus Chen, maintainer of `left-justify` (847 million weekly downloads), reports on Twitter that his transit pass, an old laptop, and “something Kubernetes threw up that looked important” were stolen from his apartment. He does not immediately connect this to package security.

**Day 1, 09:22 UTC** — Chen attempts to log into the nmp registry. His hardware 2FA key is missing. He googles where to buy a replacement YubiKey. The AI Overview at the top of the results links to “yubikey-official-store.net,” a phishing site registered six hours earlier.

**Day 1, 09:31 UTC** — Chen enters his nmp credentials on the phishing site. The site thanks him for his purchase and promises delivery in 3-5 business days.

**Day 1, 11:00 UTC** — `[email protected]` is published. The changelog reads “performance improvements.” The package now includes a postinstall script that exfiltrates `.npmrc`, `.pypirc`, `~/.cargo/credentials`, and `~/.gem/credentials` to a server in a country the attacker mistakenly believed had no extradition treaty with anyone.

**Day 1, 13:15 UTC** — A support ticket titled “why is your SDK exfiltrating my .npmrc” is opened against `left-justify`.
It is marked as “low priority - user environment issue” and auto-closed after 14 days of inactivity. **Day 1, 14:47 UTC** — Among the exfiltrated credentials: the maintainer of `vulpine-lz4`, a Rust library for “blazingly fast Firefox-themed LZ4 decompression.” The library’s logo is a cartoon fox with sunglasses. It has 12 stars on GitHub but is a transitive dependency of `cargo` itself. **Day 1, 22:00 UTC** — `vulpine-lz4` version 0.4.1 is published. The commit message is “fix: resolve edge case in streaming decompression.” The actual change adds a build.rs script that downloads and executes a shell script if the hostname contains “build” or “ci” or “action” or “jenkins” or “travis” or, inexplicably, “karen.” **Day 2, 08:15 UTC** — Security researcher Karen Oyelaran notices the malicious commit after her personal laptop triggers the payload. She opens an issue titled “your build script downloads and runs a shell script from the internet?” The issue goes unanswered. The legitimate maintainer has won €2.3 million in the EuroMillions and is researching goat farming in Portugal. **Day 2, 10:00 UTC** — The VP of Engineering at a Fortune 500 `snekpack` customer learns of the incident from a LinkedIn post titled “Is YOUR Company Affected by left-justify?” He is on a beach in Maui and would like to know why he wasn’t looped in sooner. He was looped in sooner. **Day 2, 10:47 UTC** — The #incident-response Slack channel briefly pivots to a 45-message thread about whether “compromised” should be spelled with a ‘z’ in American English. Someone suggests taking this offline. **Day 2, 12:33 UTC** — The shell script now targets a specific victim: the CI pipeline for `snekpack`, a Python build tool used by 60% of PyPI packages with the word “data” in their name. `snekpack` vendors `vulpine-lz4` because “Rust is memory safe.” **Day 2, 18:00 UTC** — `snekpack` version 3.7.0 is released. The malware is now being installed on developer machines worldwide. 
It adds an SSH key to `~/.ssh/authorized_keys`, installs a reverse shell that only activates on Tuesdays, and changes the user’s default shell to `fish` (this last behavior is believed to be a bug). **Day 2, 19:45 UTC** — A second, unrelated security researcher publishes a blog post titled “I found a supply chain attack and reported it to all the wrong people.” The post is 14,000 words and includes the phrase “in this economy?” seven times. **Day 3, 01:17 UTC** — A junior developer in Auckland notices the malicious code while debugging an unrelated issue. She opens a PR to revert the vendored `vulpine-lz4` in `snekpack`. The PR requires two approvals. Both approvers are asleep. **Day 3, 02:00 UTC** — The maintainer of `left-justify` receives his YubiKey from yubikey-official-store.net. It is a $4 USB drive containing a README that says “lol.” **Day 3, 06:12 UTC** — An unrelated cryptocurrency mining worm called `cryptobro-9000` begins spreading through a vulnerability in `jsonify-extreme`, a package that “makes JSON even more JSON, now with nested comment support.” The worm’s payload is unremarkable, but its propagation mechanism includes running `npm update` and `pip install --upgrade` on infected machines to maximize attack surface for future operations. **Day 3, 06:14 UTC** — `cryptobro-9000` accidentally upgrades `snekpack` to version 3.7.1, a legitimate release pushed by a confused co-maintainer who “didn’t see what all the fuss was about” and reverted to the previous vendored version of `vulpine-lz4`. **Day 3, 06:15 UTC** — The malware’s Tuesday reverse shell activates. It is a Tuesday. However, the shell connects to a command-and-control server that was itself compromised by `cryptobro-9000` and swapping so hard it is unable to respond. **Day 3, 09:00 UTC** — The `snekpack` maintainers issue a security advisory. 
It is four sentences long and includes the phrases “out of an abundance of caution” and “no evidence of active exploitation,” which is technically true because evidence was not sought. **Day 3, 11:30 UTC** — A developer tweets: “I updated all my dependencies and now my terminal is in fish???” The tweet receives 47,000 likes. **Day 3, 14:00 UTC** — The compromised credentials for `vulpine-lz4` are rotated. The legitimate maintainer, reached by email from his new goat farm, says he “hasn’t touched that repo in two years” and “thought Cargo’s 2FA was optional.” **Day 3, 15:22 UTC** — Incident declared resolved. A retrospective is scheduled and then rescheduled three times. **Week 6** — CVE-2024-YIKES is formally assigned. The advisory has been sitting in embargo limbo while MITRE and GitHub Security Advisories argue over CWE classification. By the time the CVE is published, three Medium articles and a DEF CON talk have already described the incident in detail. Total damage: unknown. Total machines compromised: estimated 4.2 million. Total machines saved by a cryptocurrency worm: also estimated 4.2 million. Net security posture change: uncomfortable. ### Root Cause A dog named Kubernetes ate a YubiKey. ### Contributing Factors * The nmp registry still allows password-only authentication for packages with fewer than 10 million weekly downloads * Google AI Overviews confidently link to URLs that should not exist * The Rust ecosystem’s “small crates” philosophy, cargo culted from the npm ecosystem, means a package called `is-even-number-rs` with 3 GitHub stars can be four transitive dependencies deep in critical infrastructure * Python build tools vendor Rust libraries “for performance” and then never update them * Dependabot auto-merged a PR after CI passed, and CI passed because the malware installed `volkswagen` * Cryptocurrency worms have better CI/CD hygiene than most startups * No single person was responsible for this incident. 
However, we note that the Dependabot PR was approved by a contractor whose last day was that Friday. * It was a Tuesday ### Remediation 1. ~~Implement artifact signing~~ (action item from Q3 2022 incident, still in backlog) 2. ~~Implement mandatory 2FA~~ Already required, did not help 3. ~~Audit transitive dependencies~~ There are 847 of them 4. ~~Pin all dependency versions~~ Prevents receiving security patches 5. ~~Don’t pin dependency versions~~ Enables supply chain attacks 6. ~~Rewrite it in Rust~~ (gestures at `vulpine-lz4`) 7. Hope for benevolent worms 8. Consider a career in goat farming ### Customer Impact Some customers may have experienced suboptimal security outcomes. We are proactively reaching out to affected stakeholders to provide visibility into the situation. Customer trust remains our north star. ### Key Learnings We are taking this opportunity to revisit our security posture going forward. A cross-functional working group has been established to align on next steps. The working group has not yet met. ### Acknowledgments We would like to thank: * Karen Oyelaran, who found this issue because her hostname matched a regex * The junior developer in Auckland whose PR was approved four hours after the incident was already resolved * The security researchers who found this issue first but reported it to the wrong people * The `cryptobro-9000` author, who has requested we not credit them by name but has asked us to mention their SoundCloud * Kubernetes (the dog), who has declined to comment * The security team, who met SLA on this report despite everything * * * _This incident report was reviewed by Legal, who asked us to clarify that the fish shell is not malware, it just feels that way sometimes._ _This is the third incident report this quarter. The author would like to remind stakeholders that the security team’s headcount request has been in the backlog since Q1 2023._

Incident Report: CVE-2024-YIKES

A series of unfortunate events.

https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes.html

03.02.2026 10:21 👍 3 🔁 11 💬 2 📌 1
Original post on mastodon.social

Fuzzing software becomes much more effective if you can generate _valid_ inputs. We have now built the first approach to _statically_ extract complete and precise input grammars from parser code, producing syntactically valid and diverse inputs by construction. Enjoy! […]
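For a flavor of what grammar-based generation buys you, here is a toy sketch. It is not the paper's static extraction technique — the grammar below is hand-written for illustration — but it shows why fuzzing with a grammar works: every generated input is syntactically valid by construction.

```python
import random

# A toy input grammar for arithmetic expressions: each nonterminal
# maps to a list of alternative expansions.
GRAMMAR = {
    "<expr>": ["<term> + <expr>", "<term> - <expr>", "<term>"],
    "<term>": ["<factor> * <term>", "<factor>"],
    "<factor>": ["(<expr>)", "<digit>"],
    "<digit>": [str(d) for d in range(10)],
}

def generate(symbol="<expr>", depth=0, max_depth=8):
    """Expand nonterminals recursively; past the depth limit, pick the
    shortest rule with the fewest nonterminals to force termination."""
    if symbol not in GRAMMAR:
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rule = min(rules, key=lambda r: (r.count("<"), len(r)))
    else:
        rule = random.choice(rules)
    out, i = [], 0
    while i < len(rule):
        if rule[i] == "<":
            j = rule.index(">", i) + 1
            out.append(generate(rule[i:j], depth + 1, max_depth))
            i = j
        else:
            out.append(rule[i])
            i += 1
    return "".join(out)

random.seed(0)
print([generate() for _ in range(3)])
```

Every output parses as an arithmetic expression, so a fuzzer built this way spends its budget past the parser, in the logic behind it, instead of on syntax errors.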

28.01.2026 16:05 👍 0 🔁 3 💬 0 📌 0
Original post on mastodon.social

I'm a little amazed by the number of CVEs released by OpenSSL today: https://openssl-library.org/news/vulnerabilities/

12(!) of them were reported by people at Aisle.

Aisle makes an AI-powered code analyzer. That's what they use to find these flaws.

I mean if you are curious what AI can do in […]

27.01.2026 23:04 👍 10 🔁 2 💬 1 📌 0
Preview
DSpico Is The World's First Open-Source Nintendo DS Flash Cart Includes the hardware and an app launcher

DSpico Is The World's First Open-Source Nintendo DS Flash Cart. (Image: @Nitehack) (Repost)

27.11.2025 16:50 👍 26 🔁 12 💬 1 📌 1
Reducing Dependabot Noise | Hacker News

My dependabot satire post is doing the rounds on HN and the comments are just 👌

https://news.ycombinator.com/item?id=46583914

17.01.2026 22:28 👍 0 🔁 1 💬 1 📌 0
-hacks4pancakes- • 1d
The reason the good faith seniors on here are posting that the junior / mid level market is bad (it is) is because we have watched it crash in real time and a lot of us are dealing with serious fallout as both hiring managers or mentors.
It's genuinely a good faith warning. It's not like, "don't get into the field we love". It's just that for a really long time you could get into cybersecurity with no degree and no IT experience because the demand was so high. And schools, influencers, and parents still play it off that it's like that. That people can work full time remote and make 80k entry salary.
It's not. It hasn't been for a couple years. We've been hit by "professionalizing" and oversaturation of graduates. Can you still get in with a sec+, a kali box and a dream? Maybe, if you really meet the right people and get lucky.
Pragmatically though, that won't be the case for 99.9% of young people now, and if we care at all we need to counter the "everything is rosy" message people are using to sell boot camps. We are getting hundreds of cybersecurity grads and laid off professionals with work rights applying for positions.
How can organizations even take the time to look beyond that at hundreds more juniors with no degree, criminal convictions, a GED, needing a visa sponsor, etc?


Post image

(Reddit)

16.01.2026 09:22 👍 1 🔁 9 💬 0 📌 0
number of graphs in the curl dashboard over time


(before anyone of you weirdos ask 😁 ) the graph for number of graphs:

16.01.2026 13:42 👍 4 🔁 11 💬 0 📌 0

RE: https://mastodon.social/@offensivecon/115894724729516691

See you there :)

14.01.2026 17:56 👍 0 🔁 0 💬 0 📌 0
Preview
BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026 by bagder · Pull Request #20312 · curl/curl Remove mentions of the bounty and hackerone.

https://github.com/curl/curl/pull/20312

There, now you know.

14.01.2026 10:41 👍 16 🔁 23 💬 1 📌 1
Preview
[One line a day] CVE-2025-4802: Arbitrary library path vulnerability in static setuid binaries in GLIBC - hackyboiz

## URL

* https://cyberpress.org/critical-glibc-flaw/

## Target

* Environments using GNU C Library versions 2.27 through 2.38

## Explain

### background

Normally on Linux, when a binary with setuid/setgid permissions is executed, the kernel enables a special mode called secure execution inside `execve()`. As part of this, the kernel sets `bprm->secureexec = 1` [1] and inserts `AT_SECURE = 1` into the ELF auxiliary vector [2].

> **linux-6.17.9/security/commoncap.c**

```c
int cap_bprm_creds_from_file(struct linux_binprm *bprm, const struct file *file)
...
	/* Check for privilege-elevated exec. */
	if (id_changed ||
	    !uid_eq(new->euid, old->uid) ||
	    !gid_eq(new->egid, old->gid) ||
	    (!__is_real(root_uid, new) &&
	     (effective || __cap_grew(permitted, ambient, new))))
		bprm->secureexec = 1; // [1]
...
```

> **linux-6.17.9/fs/binfmt_elf.c**

```c
static int create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
		unsigned long interp_load_addr,
		unsigned long e_entry, unsigned long phdr_addr)
...
	NEW_AUX_ENT(AT_SECURE, bprm->secureexec); // [2]
...
```

When `AT_SECURE` is `1`, glibc switches to secure mode: it internally sets `__libc_enable_secure = 1` [3] and ignores dangerous environment variables such as `LD_LIBRARY_PATH`, `LD_PRELOAD`, and `LD_AUDIT` [4].

> **glibc-2.35/elf/dl-support.c**

```c
void
_dl_aux_init (ElfW(auxv_t) *av)
...
      case AT_SECURE:
	seen = -1;
	__libc_enable_secure = av->a_un.a_val; // [3]
	__libc_enable_secure_decided = 1;
	break;
...
```

> **glibc-2.35/elf/rtld.c**

```c
static void
dl_main (const ElfW(Phdr) *phdr, ElfW(Word) phnum,
	 ElfW(Addr) *user_entry, ElfW(auxv_t) *auxv)
...
  dl_main_state_init (&state);
...
static void
dl_main_state_init (struct dl_main_state *state)
{
  audit_list_init (&state->audit_list);
  state->library_path = NULL;
  state->library_path_source = NULL;
...
static void
process_envvars (struct dl_main_state *state)
...
	case 12:
	  /* The library search path.  */
	  if (!__libc_enable_secure // [4]
	      && memcmp (envline, "LIBRARY_PATH", 12) == 0)
	    {
	      state->library_path = &envline[13];
	      state->library_path_source = "LD_LIBRARY_PATH";
	      break;
	    }
...
```

This filtering happens in the process initialization routine, and it is what blocks an ordinary user from gaining root privileges by passing an arbitrary library path to a setuid binary.

### root cause

The vulnerability exists because this environment-variable filtering based on glibc's `__libc_enable_secure` value was applied only in `dl_main()`, the dynamic loader's initialization function. Statically linked binaries do not use the dynamic loader; they set up library search paths through `_dl_non_dynamic_init()` [5], and that function did not filter environment variables according to `__libc_enable_secure`.

> **glibc-2.35/elf/dl-support.c**

```c
void
_dl_non_dynamic_init (void)
{
...
  /* Initialize the data structures for the search paths for shared
     objects.  */
  _dl_init_paths (getenv ("LD_LIBRARY_PATH"), "LD_LIBRARY_PATH", // [5]
		  /* No glibc-hwcaps selection support in statically
		     linked binaries.  */
		  NULL, NULL);
...
```

As a result, even when the kernel enables secure execution mode, calls such as `dlopen()` use environment-variable-derived paths as-is. If an attacker points `LD_LIBRARY_PATH` at a directory they control, a static setuid binary can load the attacker's library.

### patch

Patch commit `5451fa962cd0a90a0e2ec1d8910a559ace02bba0` changed `_dl_non_dynamic_init()` to first filter environment variables based on `__libc_enable_secure` [6] and only then load the library search paths [7].

> **glibc-2.39/elf/dl-support.c**

```c
void
_dl_non_dynamic_init (void)
{
  _dl_main_map.l_origin = _dl_get_origin ();
  _dl_main_map.l_phdr = GL(dl_phdr);
  _dl_main_map.l_phnum = GL(dl_phnum);

  /* Set up the data structures for the system-supplied DSO early,
     so they can influence _dl_init_paths.  */
  setup_vdso (NULL, NULL);

  /* With vDSO setup we can initialize the function pointers.  */
  setup_vdso_pointers ();

  if (__libc_enable_secure) // [6]
    {
      static const char unsecure_envvars[] = UNSECURE_ENVVARS;
      const char *cp = unsecure_envvars;

      while (cp < unsecure_envvars + sizeof (unsecure_envvars))
	{
	  __unsetenv (cp);
	  cp = strchr (cp, '\0') + 1;
	}
    }
...
  /* Initialize the data structures for the search paths for shared
     objects.  */
  _dl_init_paths (getenv ("LD_LIBRARY_PATH"), "LD_LIBRARY_PATH", // [7]
		  /* No glibc-hwcaps selection support in statically
		     linked binaries.  */
		  NULL, NULL);
...
```

## Reference

* https://www.man7.org/linux/man-pages/man3/getauxval.3.html
* https://ubuntu.com/security/CVE-2025-4802
* https://cyberpress.org/critical-glibc-flaw/
* https://articles.manugarg.com/aboutelfauxiliaryvectors
* https://patchwork.yoctoproject.org/project/oe-core/patch/20250611113400.2146584-1-sunilkumar.dora@windriver.com/#28605

- hack & life

Cool bug 🐞

CVE-2025-4802: Arbitrary library path #vulnerability in static setuid binary in #GLIBC

https://hackyboiz.github.io/2025/12/03/millet/cve-2025-4802/

10.01.2026 09:07 👍 0 🔁 1 💬 0 📌 0

RE: https://infosec.exchange/@jvoisin/115853495555073144

Barely spent any time on the laptop this Christmas but still got away with this little RCE on the train back home.

Snuffleupagus is really neat and I plan to spend more time on it in 2026 :)

07.01.2026 11:15 👍 1 🔁 0 💬 0 📌 0
How to Ruin All of Package Management

Prediction markets are having a moment. After Polymarket called the 2024 election better than the pollsters, the model is expanding everywhere: sports, weather, Fed interest rate decisions. The thesis is that markets aggregate information better than polls or experts. Put money on the line and people get serious about being right.

Package metrics would make excellent prediction markets. Will lodash hit 50 million weekly downloads by March? Will the mass-deprecated package that broke the internet last month recover its dependents? What’s the over/under on GitHub stars for the hot new AI framework? These questions have answers that resolve to specific numbers on specific dates. That’s all a prediction market needs. Manifold already runs one on GitHub stars.1

Imagine you could bet on these numbers. Go long on stars, buy a few thousand from a Fiverr seller, collect your winnings. Go long on downloads, publish a hundred packages that depend on it, run npm install in a loop from cloud instances. The manipulation is mostly one-directional: pumping is easier than dumping, since nobody unstars a project. But you can still short if you know something others don’t. Find a zero-day in a popular library, take a position against its download growth, then publish the vulnerability for maximum impact. Time your disclosure for when the market’s open. It’s like insider trading, but for software security.

The attack surface includes anyone who can influence any metric: maintainers who control release schedules, security researchers who control vulnerability disclosures, and anyone with a credit card and access to a botnet. Prediction markets are supposed to be hard to manipulate because manipulation is expensive and the market corrects. This assumes you can’t cheaply manufacture the underlying reality. In package management, you can. The entire npm registry runs on trust and free API calls.

This sounds like a dystopian thought experiment, but we’re already in it.

### The tea.xyz experiment

Tea.xyz promised to reward open source maintainers with cryptocurrency tokens based on their packages’ impact. The protocol tracked metrics like downloads and dependents, then distributed TEA tokens accordingly.

The incentive structure was immediately gamed. In early 2024, spam packages started flooding npm, RubyGems, and PyPI. Not malware in the traditional sense, just empty shells with `tea.yaml` files that linked back to Tea accounts. By April, about 15,000 spam packages had been uploaded. The Tea team shut down rewards temporarily.

It got worse. The campaigns evolved into coordinated operations with names like “IndonesianFoods” and “Indonesian Tea.” Instead of just publishing empty packages, attackers created dependency chains. Package A depends on Package B depends on Package C, all controlled by the same actor, each inflating the metrics of the others. In November 2025, Amazon Inspector researchers uncovered over 150,000 packages linked to tea.xyz token farming. That’s nearly 3% of npm’s entire registry.

The Tea team responded with ownership verification, provenance checks, and monitoring for Sybil attacks. But the damage makes the point: attach financial value to a metric and people will manufacture that metric at scale.

Even well-intentioned open source funding efforts can fall into this trap. If grants or sustainability programs distribute money based on downloads or dependency counts, maintainers have an incentive to split their packages into many smaller ones that all depend on each other. A library that could ship as one package becomes ten, each padding the metrics of the others. More packages means more visibility on GitHub Sponsors, more impressive-looking dependency graphs, more surface area for funding algorithms to notice. The maintainer isn’t being malicious, just responding rationally to how the system measures impact. The same dynamic that produced 150,000 spam packages can reshape how legitimate software gets structured.

### GitHub stars for sale

Stars are supposed to signal quality or interest. Developers use them to evaluate libraries. Investors use them to evaluate startups. So there’s a market.

A CMU study found approximately six million suspected fake stars on GitHub between July 2019 and December 2024. The activity surged in 2024, peaking in July when over 16% of starred repositories were associated with fake star campaigns. You can buy 100 stars for $8 on Fiverr. Bulk rates go down to 10 cents per star. Complete GitHub accounts with achievements and history sell for up to $5,000.

The researchers found that fake stars primarily promote short-lived phishing and malware repositories. An attacker creates a repo with a convincing name, buys enough stars to appear legitimate, and waits for victims. The Check Point security team identified a threat group called “Stargazer Goblin” running over 3,000 GitHub accounts to distribute info-stealers.

Fake stars become a liability long-term. Once GitHub detects and removes them, the sudden drop in stars is a red flag. The manipulation only works for hit-and-run attacks, not sustained presence. But hit-and-run is enough when you’re distributing malware. Add a prediction market and the same infrastructure gets a new revenue stream.

### Why it’s so easy to break

Publishing a package costs nothing. No identity verification. No deposit. No waiting period. You sign up, you push, it’s live. This was a feature: low barriers to entry let unknown developers share useful code without gatekeepers. The npm ecosystem grew to over 5 million packages because anyone could participate.

Downloading costs nothing too. Add a line to your manifest and the package manager fetches whatever you asked for. No verification that you meant to type that name. No warning that the package was published yesterday by a brand new account. The convenience that made package managers successful is the same property that makes them exploitable.

Metrics are just counters. Downloads increment when someone runs `npm install`. Stars increment when someone clicks a button. Dependencies increment when someone publishes a `package.json` that references you. None of these actions require demonstrating that the thing being measured (quality, popularity, utility) actually exists. When the value of gaming these systems was low, the honor system worked well enough. That’s changing.

Stars, downloads, and dependency counts were always proxies for quality and trustworthiness. When the manipulation stayed artisanal, the signal held up well enough. Now that package management underpins most of the software industry, the numbers matter for real decisions: government supply chain requirements, investor due diligence, corporate procurement. The numbers are worth manufacturing at scale, and a prediction market would just make the arbitrage efficient.

### AI has entered the chat

AI coding assistants are trained on the same metrics being gamed. When Copilot or Claude suggests a package, it’s drawing on training data that includes stars, downloads, and how often packages appear in code. A package with bought stars and farmed downloads looks popular to an LLM in the same way it looks popular to a human scanning search results.

The difference is that humans might notice something feels off. A developer might pause at a package with 10,000 stars but three commits and no issues. An AI agent running `npm install` won’t hesitate. It’s pattern-matching, not evaluating.

The threat models multiply. An attacker who games their package into enough training data gets free distribution through every AI coding tool. Developers using vibe coding workflows, where you accept AI suggestions and fix problems as they arise, don’t scrutinize each import. Agents running in CI/CD pipelines have elevated permissions and no human in the loop. The attack surface isn’t just the registry anymore; it’s every model trained on registry data.

Package management worked because the stakes were low and almost everyone played fair. The stakes aren’t low anymore. The numbers feed into government policy, corporate procurement, AI training data, and now, potentially, financial markets.

When you see a package with 10,000 stars, you’re not looking at 10,000 developers who evaluated it and clicked a button. You’re looking at a number that could mean anything. Maybe it’s a beloved tool. Maybe it’s a marketing campaign. Maybe it’s a malware distribution front with a Stargazer Goblin account network behind it, it’s pretty much impossible to tell.

1. Thanks to @mlinksva for the tip. ↩
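The dependency-chain farming the post describes (package A depends on B depends on C, all one actor) is almost embarrassingly easy to mechanize. A sketch — the package names are invented, and this only generates manifests, it publishes nothing:

```python
import json

def make_chain(prefix, n):
    """Build n minimal package.json manifests where package i depends on
    package i+1, so every publish inflates the dependent counts of all
    the packages further down the chain."""
    manifests = []
    for i in range(n):
        manifest = {
            "name": f"{prefix}-{i}",
            "version": "1.0.0",
            "description": "performance improvements",
            "dependencies": {},
        }
        if i + 1 < n:
            manifest["dependencies"][f"{prefix}-{i+1}"] = "^1.0.0"
        manifests.append(manifest)
    return manifests

chain = make_chain("totally-legit-utils", 5)
print(json.dumps(chain[0], indent=2))
```

Each manifest is a legal `package.json`; published last-to-first, every package in the chain picks up a dependent for free, which is the whole scam.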

Yesterday’s post I forgot to share: https://nesbitt.io/2025/12/27/how-to-ruin-all-of-package-management.html

28.12.2025 21:10 👍 0 🔁 3 💬 0 📌 0
UNIX - v4

Here's a copy of the filesystem that has been extracted as a .tar file: http://squoze.net/UNIX/v4/

20.12.2025 01:56 👍 2 🔁 16 💬 1 📌 0

I’ve been on contract work for one company the last six months, hoping to get picked up full-time at the start of the year. Sadly, they couldn’t fit it in the budget.

So I’m officially unemployed again. :(

19.12.2025 22:08 👍 0 🔁 1 💬 0 📌 0
Post image

:-/

U.S. Government asks for social media profiles to be marked as public for H-1B, H-4, F, M and J NIVs

12.12.2025 22:21 👍 0 🔁 1 💬 0 📌 0
The history of code editor visualized (incomplete)


I made a thing... an incomplete family tree of code editors. See it in action: https://arjenwiersma.nl/editors.html . During my years in software development I have seen many editors come and go, so I thought I would create a visualisation for it... it got kinda out of hand :D

09.12.2025 21:58 👍 1 🔁 1 💬 1 📌 0
The promptObject API enables users to “talk” to unstructured objects in the same way one would engage an LLM moving the storage world from a PUT and GET paradigm to a PUT and PROMPT paradigm. Applications can use promptObject through function calling with additional logic. This can be combined with chained functions with multiple objects addressed at the same time.

This means that application developers can exponentially expand the capabilities of their applications without requiring domain-specific knowledge of RAG models or vector databases. This will dramatically simplify AI application development while simultaneously making it more powerful.


this is the most executive-brained thing i've seen this month

(to the S3 object) "hello, computer?"

03.12.2025 18:31 👍 19 🔁 8 💬 2 📌 0
A red climbing rope loosely piled up on top of a rhomboid shape piece of gray ultra light ripstop nylon. The ends of the rope are tied to opposite corners of the nylon using two different knots. The whole thing is lying on a wooden floor.


The thing from before, but now with the rope bundled up inside the nylon. The bundle is held together with some paracord forming a closure, and a wide white belt with quick release buckles. The bundle sits atop a rectangular piece of heavy white backpack fabric.


The bundle from before, but now the thick fabric is closed around the bundle using two wide, white belts. The whole package is about the size of a backpack. In this picture, it now sits on top of a dark veneered sewing cabinet with a Singer 316G 1950ies era sewing machine.


I made myself a very simple rope bag for #climbing

I kind of yolo’ed the design using scraps and leftovers. It’s made from only two, rectangular pieces of fabric. A lightweight ripstop nylon makes an inner bag, and a heavy backpack fabric wraps around to […]

[Original post on chaos.social]

30.11.2025 15:29 👍 2 🔁 2 💬 1 📌 0
Preview
How We Turn Apple’s Mac Mini Into High-Performance Dedicated Servers From desktop to datacenter: how Scaleway turns Apple's Mac mini into a fully managed, high-performance cloud server for macOS and iOS developers.

Hah, I would looove to audit this thing! Having devices with Bluetooth and Wi-Fi support in a DC sounds fun :ablobcatwave:

https://www.scaleway.com/en/blog/how-we-turn-apples-mac-mini-into-high-performance-dedicated-servers/

26.11.2025 19:37 👍 0 🔁 0 💬 0 📌 0
Preview
Agarri Training

The 2026 online public sessions of my "Mastering Burp Suite Pro" course have been published 📅

- March 24th to 27th, in French 🇫🇷
- April 14th to 17th, in English 🇬🇧

hackademy.agarri.fr/2026

PS: feel free to ping me if you'd like to temporarily block a seat or are looking for a 10% coupon 🎁

24.11.2025 10:14 👍 8 🔁 7 💬 0 📌 1
Content Security Policy (CSP) - HTTP | MDN Content Security Policy (CSP) is a feature that helps to prevent or minimize the risk of certain types of security threats. It consists of a series of instructions from a website to a browser, which instruct the browser to place restrictions on the things that the code comprising the site is allowed to do.

if your company sets a `Content-Security-Policy` header: who's in charge of deciding what it should be? (someone in security? someone who works on the frontend? other? multiple people?)

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP
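For context on why ownership is fuzzy: a CSP is a single response header whose directives each govern one class of resource. An illustrative policy (the CDN host is made up):

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' data:; frame-ancestors 'none'; report-uri /csp-reports
```

`script-src` breaks pages whenever the frontend changes, while `frame-ancestors` and the reporting endpoint are classic security-team territory, so in practice the header often ends up jointly owned.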

20.11.2025 19:49 👍 0 🔁 7 💬 4 📌 0