
Theory of Computing Report

@theory.report

[bridged from https://theory.report/ on the web: https://fed.brid.gy/web/theory.report ]


Latest posts by Theory of Computing Report @theory.report

𝗣𝗼𝘀𝘁-𝗗𝗼𝗰𝘁𝗼𝗿𝗮𝗹 𝗙𝗲𝗹𝗹𝗼𝘄𝘀𝗵𝗶𝗽𝘀 at Indian Institute of Science (IISc), Bengaluru (apply by February 28, 2026) The Algorithms group at IISc Bengaluru invites postdoc applications. Areas include Approximation/Online Algorithms, Game Theory, Computational Geometry, Optimization, Learning, and more. Fellowship: ₹80,000–1,30,000/month + travel/research grant. Faculty: Siddharth Barman, Arindam Khan, Anand Louis, Rahul Saladi. 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗟𝗶𝗻𝗸: https://forms.gle/moz2vx7tiNFCbXmS6 Website: https://www.linkedin.com/posts/arindam-khan-445ab615_postdoc-jobs-algorithms-activity-7427285529886580736-6tUk?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAMwLfUBJutL4b0gGJNLzdNG9Zwai-rt7_M Email: ARINDAMKHAN@IISC.AC.IN By shacharlovett
11.02.2026 10:10
TR26-014 | A Fourier-Analytic Switching Lemma over F_p and the AC^0 Lower Bound for Generalized Parity | Yipin Wang We prove a switching lemma for constant-depth circuits over the alphabet $F_p$ with generalized AND/OR gates, extending Tal's Fourier-analytic approach from the Boolean setting. The key new ingredient is a direct computation of the $L_1$ Fourier mass of AND/OR gates over $F_p$, which yields an exact closed-form expression for the expected high-degree Fourier mass after a random restriction. Combined with a Markov inequality argument, this gives a switching lemma with an explicit, prime-independent structure: $Pr[DT(f|\rho) \geq s] \leq (ep \cdot qK / ((p-1)s))^s$. As a consequence, we obtain that for any prime $p$, constant-depth circuits of sub-exponential size over $F_p$ cannot compute $1[\sum_i x_i \equiv 0 \pmod{p}]$.
09.02.2026 01:57
I used to think historians in the future will have too much to work with. I could be wrong. (I thought I had already posted this, but the blogger system we use says I didn't. Apologies if I did. Most likely I posted something similar. When you blog for X years you forget what you've already blogged on.)

Historians who study ancient Greece often have to work with fragments of text or just a few pottery shards. Nowadays we preserve so much that historians 1000 years from now will have an easier time. Indeed, they may have too much to look at, and will have to sort through news, fake news, opinions, and satires to figure out what was true.

The above is what I used to think. But I could be wrong.

1) When technology changes, stuff is lost. E.g., floppy disks.

2) (The inspiration for this blog post.) Harry Lewis gave a talk in Zurich on _The Birth of Binary: Leibniz and the origins of computer arithmetic_ on Dec 8, 2022 at 1:15PM-3:30PM Zurich time. I didn't watch it live (too early in the morning, east coast time) but it was taped and I watched a recording later. Yeah! His blog about it (see here) had a pointer to the video, and my blog about it (see here) had a pointer to both the video and to his blog. A while back I was writing a blog post where I wanted to point to the video. My link didn't work. His link didn't work. I emailed him asking where it was. IT IS LOST FOREVER! Future historians will not know about Leibniz and binary! Or they might--- he has a book on the topic that I reviewed here. But what if the book goes out of print and the only information on this topic is my review of his book?

3) Entire journals can vanish. I blogged about that here.

4) I am happy that the link to the Wikipedia entry on Link Rot (see here) has not rotted.

5) I did a post on what tends to NOT be recorded, and hence may be lost forever, here.

6) (This is a bigger topic than my one point here.) People tend to OWN less than they used to. DVDs: don't bother buying! Whatever you want is on streaming. (I recently watched, for the first time, Buffy the Vampire Slayer, one episode a day, on the treadmill, and it was great!) CDs: don't bother buying! Use Spotify. I do that and it's awesome; I have found novelty songs I didn't know about! Including a song by _The Doubleclicks_ which I thought was about Buffy: here. I emailed them about it and they responded with: _Hello! Buffy, hunger games, divergent, Harry Potter, you name it._ JOURNALS: don't bother buying them; it's all on arXiv (very true in TCS, might be less true in other fields). CONFERENCES: Not sure. I think very few have paper proceedings. At one time they gave out memory sticks with all the papers on them, so that IS ownership, though it depends on technology that might go away. Not sure what they do now. This may make it easier to lose things, since nobody has a physical copy.

7) Counterargument: Even given the points above, far more today IS being preserved than used to be. See my blog post on that here. But will that be true in the long run?

8) I began by saying that I used to think future historians will have too much to look at and will have to sort through lots of stuff (using quicksort?) to figure out what's true. Then I said they may lose a lot. Oddly enough, both might be true: of the stuff they DO have, they will have a hard time figuring out what's true. (E.g., was Pope Leo's ugrad thesis on _Rado's Theorem for Non-Linear Equations_? No. See my blog about that falsehood getting out to the world here. Spoiler alert: it was my fault.)

QUESTIONS:

1) Am I right--- will the future lose lots of stuff?

2) If so, what can we do about this? Not clear who _we_ is in that last sentence.

By gasarch
09.02.2026 01:57
Towards Efficient Data Structures for Approximate Search with Range Queries **Authors:** Ladan Kian, Dariusz R. Kowalski Range queries are simple and popular types of queries used in data retrieval. However, extracting exact and complete information using range queries is costly. As a remedy, some previous work proposed a faster principle, {\em approximate} search with range queries, also called single range cover (SRC) search. It can, however, produce some false positives. In this work we introduce a new SRC search structure, a $c$-DAG (Directed Acyclic Graph), which provably decreases the average number of false positives by a logarithmic factor while keeping asymptotically the same time and memory complexities as a classic tree structure. A $c$-DAG is a tunable augmentation of the 1D-Tree with denser overlapping branches ($c \geq 3$ children per node). We perform a competitive analysis of a $c$-DAG with respect to the 1D-Tree and derive an additive constant time overhead and a multiplicative logarithmic improvement of the false positives ratio, on average. We also provide a generic framework to extend our results to empirical distributions of queries, and demonstrate its effectiveness for the Gowalla dataset. Finally, we quantify and discuss security and privacy aspects of SRC search on $c$-DAG vs 1D-Tree, mainly mitigation of structural leakage, which makes $c$-DAG a good data structure candidate for deployment in privacy-preserving systems (e.g., searchable encryption) and multimedia retrieval.
09.02.2026 01:00
Danish Data Science Academy postdoc positions at University of Copenhagen (apply by March 4, 2026) The Danish Data Science Academy invites applications for postdoc fellowships for visionary and ambitious young data scientists. The call is at https://ddsa.dk/postdocfellowshipcall2026/. Inquiries about opportunities in the Algorithms and Complexity Section at the University of Copenhagen are welcome and can be directed to Jakob Nordstrom or other faculty in the section. Website: https://ddsa.dk/postdocfellowshipcall2026/ Email: jn@di.ku.dk By shacharlovett
08.02.2026 19:20
TR26-013 | Quantum–Classical Equivalence for AND-Functions | Sreejata Bhattacharya, Farzan Byramji, Arkadev Chattopadhyay, Yogesh Dahiya, Shachar Lovett A major open problem at the interface of quantum computing and communication complexity is whether quantum protocols can be exponentially more efficient than classical protocols for computing total Boolean functions; the prevailing conjecture is that they are not. In a seminal work, Razborov (2002) resolved this question for AND-functions of the form $$ F(x,y) = f(x_1 \land y_1, \ldots, x_n \land y_n), $$ when the outer function $f$ is symmetric, by proving that their bounded-error quantum and classical communication complexities are polynomially related. Since then, extending this result to all AND-functions has remained open and has been posed by several authors. In this work, we settle this problem. We show that for every Boolean function $f$, the bounded-error quantum and classical communication complexities of the AND-function $f \circ \mathrm{AND}_2$ are polynomially related, up to polylogarithmic factors in $n$. Moreover, modulo such polylogarithmic factors, we prove that the bounded-error quantum communication complexity of $f \circ \mathrm{AND}_2$ is polynomially equivalent to its deterministic communication complexity, and that both are characterized—up to polynomial loss—by the logarithm of the De Morgan sparsity of $f$. Our results build on the recent work of Chattopadhyay, Dahiya, and Lovett (2025) on structural characterizations of non-sparse Boolean functions, which we extend to resolve the conjecture for general AND-functions.
08.02.2026 07:53
Trevisan Award for Expository Work Posted on behalf of Salil Vadhan, the award committee chair: The **Trevisan Award for Expository Work** is a new SIGACT award created in memory of Luca Trevisan (1971-2024), with a nomination deadline of **April 10, 2026.** The award is intended to promote and recognize high-impact work expositing ideas and results from the Theory of Computation. The exposition can have various target audiences, e.g. people in this field, people in adjacent or remote academic fields, as well as the general public. The form of exposition can vary, and can include books, surveys, lectures, course materials, video, audio (e.g. podcasts), blogs and other media products. The award may be given to a single piece of work or a series produced over time. The award may be given to an individual, or a small group who together produced this expository work. The awardee will receive USD 2000 (to be divided among the awardees if multiple), as well as travel support if needed to attend STOC, where the award will be presented. STOC 2026 is June 22-26 in Salt Lake City, Utah. The endowment for this prize was initiated by a gift from Avi Wigderson, drawing on his Turing Award, and has been subsequently augmented by other individuals. For more details see https://sigact.org/prizes/trevisan.html. By shuchic
08.02.2026 01:46
Secrets of Intelligence Services <p>It is simultaneously true that coding agents are by far my favorite generative AI innovation and the most annoying to read about online. Agents profoundly democratize computing, in ways as impactful as the personal computer and the graphical user interface. Unfortunately, they are also sold by AI companies and thus come shrouded in animistic mysticism. This leads to dumb one-day stories like Moltbook or endless annoying Twitter threads by economists on the future of labor.</p> <p>I know that any <a href="https://en.wikipedia.org/wiki/Clarke%27s_three_laws">sufficiently advanced technology is indistinguishable from magic</a>, and agents do feel magical in a way distinct from chatbots. They let you produce working software for almost any idea you have in your head, conceptually a major technological breakthrough beyond the language machine. But the funny thing about agents is that they are not particularly advanced technology in the grand scheme of things. Agents are <em>far far</em> simpler than LLMs.</p> <p>I was trying to explain to a friend how agents worked, and I realized I didn’t have a good vocabulary for the job. Reading a bunch of tutorials and looking at a bunch of open source repositories didn’t help. 
On a whim, I tried something silly that turned out to be far more illuminating than I expected.</p> <p>I asked Claude Code to build itself.</p> <pre><code>“Build a minimalist coding agent that exposes step by step how the agent works.”</code></pre> <p>The resulting 400 lines of overly commented Python were instructive. My “microagent” generates perfectly reasonable code, and, because Claude is sentient and clearly knows everything about me,<a class="footnote-anchor" href="#footnote-1" id="footnote-anchor-1" target="_self">1</a> the design uncannily emphasizes the power of feedback that I’ve been blogging about. The agent is simple, ordinary software in feedback with a complex, extraordinary language model. Agents are a perfect example of how the feedback interconnection of a basic, precise component with an uncertain, generalist component achieves outcomes neither component can enact on its own.</p> <p>For the purposes of this post, the human (aka me) has one job: setting the initial conditions of the feedback loop with a natural language prompt. With this prompt, the agent kicks off a sequence of interactions with the LLM. At every turn, the agent receives a message from the LLM, parses it to identify a valid action, takes the action, and then sends the LLM a message describing the outcome. It also has the option to stop acting and end the loop.</p> <p>The set of valid agent actions, called “tools” in agent land, is small, but they take you well beyond chatting. The tools execute actual actions on your computer. Software can do a surprising amount with three simple actions: read_file, write_file, and shell. The first two are self-explanatory. read_file takes as input a filename and outputs the contents. write_file takes as input a filename and content and writes the content to disk under that filename. shell executes a command-line script and returns its output. These three operations let you copy a piece of software onto your computer, compile it, and run it. 
That’s enough power to do <em>basically anything</em>. These plugs into the world outside the chat window are what make the agents so impressive.</p> <p>Here’s an architecture diagram of the simple agent, with labels denoting the order of the data flow. Steps 1-4 are repeated until the response tells the agent to be done.</p> <figure><img alt="Architecture diagram of the simple agent" src="https://substackcdn.com/image/fetch/$s_!1qgQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82920e33-fa7c-4de8-8628-1074ffded2f6_1758x1102.png" /></figure> <p>Let me show you the gory details of how it all works in a simple example. I type the following text into the microagent software:</p> <pre><code>Create a file called test.txt with ‘hello world’</code></pre> <p>The agent now needs to send this command to the LLM. It uses JSON, not English, to do so. JSON, short for JavaScript Object Notation, is one of the core data formats on the internet. It consists of long spaghetti lists of “human readable” name-value pairs that encode data in ways that make it relatively simple for different applications to talk to each other. Here’s the agent’s message in JSON:</p> <pre><code>{
  "messages": [
    {
      "role": "Ben",
      "content": "Create a file called test.txt with 'hello world'"
    }
  ],
  "tools": ["read_file", "write_file", "shell"]
}</code></pre> <p>This isn’t too bad to read, right? There are some curly brackets and straight brackets for grouping things. The first grouping encodes my prompt. The second grouping lists the agent’s available tools.</p> <p>The agent sends this message to Anthropic via <a href="https://platform.claude.com/docs/en/api/overview">standard web protocols</a>. All of the major AI players provide Python interfaces and JSON schemas for interacting with their language models. 
You send them a properly formatted package of data over an internet tube, and they will send you something back.</p> <p>In this example, Anthropic processes the JSON and returns the following:</p> <pre><code>{
  "content": "I'll create a file called test.txt with the content 'hello world'.",
  "tool_calls": [
    {
      "name": "write_file",
      "parameters": {
        "path": "test.txt",
        "content": "hello world"
      },
      "id": "toolu_01..."
    }
  ]
}</code></pre> <p>Here, the LLM’s reply includes English text designed to make you think it is conscious, which the agent dutifully prints to the screen. The LLM also sends actual code for the agent to run.</p> <p>The agent <em>expects</em> the LLM to return JSON that indicates a command to execute one of its tools. I don’t know what actually happens on the Anthropic side of the line, but it doesn’t really matter. Maybe Anthropic just takes the raw JSON and feeds it as raw tokens into a language model. Maybe Anthropic has its own simple agent that reads the JSON and dispatches prompts to the LLMs accordingly. Maybe it’s actual magic of a mystical elf using dark sorcery.</p> <p>Whatever is happening behind the curtain doesn’t matter to the agent. The agent expects a particular kind of answer back. If it’s not in a valid format, the agent can flag an error and quit. At the expense of a little more software complexity, you could add simple rules to “try again,” hoping that the internal stochasticity of the LLM might yield a valid JSON message after enough attempts. Regardless, the agent side is all just simple rules that you would use to build any internet application.</p> <p>In the simple running example here, the JSON sent back from Anthropic is perfectly fine. After displaying the English to the screen to placate me, the agent, explicitly programmed to read JSON, skips to the field “tool_calls.” It then tries to execute whatever is there. It sees the tool “write_file” and the parameters to pass as arguments to the tool’s path and content. 
You would not be surprised to learn that applying write_file to that content creates a file “test.txt” containing the words “hello world.”</p> <p>Once this action is successfully executed, the agent sends the following message back to the LLM:</p> <pre><code>{
  "messages": [
    {
      "role": "Ben",
      "content": "Create a file called test.txt with 'hello world'"
    },
    {
      "role": "assistant",
      "content": "I'll create a file called test.txt with the content 'hello world'.",
      "tool_calls": [
        {
          "name": "write_file",
          "id": "toolu_01..."
        }
      ]
    },
    {
      "role": "tool",
      "content": "\u2713 Wrote 11 bytes to test.txt",
      "tool_use_id": "toolu_01..."
    }
  ],
  "tools": ["read_file", "write_file", "shell"]
}</code></pre> <p>You now see why I put scare quotes around human readable. Even this simple example is quickly getting out of hand. However, this message reveals something surprising: in my trivial implementation, the agent and LLM do not share state. All interaction occurs through a stateless interface (a REST API), and the agent sends the entire interaction history to the LLM in every round. There are likely smarter ways to save tokens here, but those details are for another day. Agent software can create an <em>illusion</em> of state for the human, while actually just logging a long, unsummarized memory.</p> <p>Whatever the case, Anthropic munges the long message and sends back the curt reply.</p> <pre><code>{
  "content": "The file test.txt has been successfully created with the content 'hello world'.",
  "is_final": true
}</code></pre> <p>The agent sees the LLM saying “is_final” is true. This is the signal to end the interaction. The agent prints the content to the screen and closes up shop.</p> <pre><code>The file test.txt has been successfully created with the content ‘hello world’.</code></pre> <p>The simple microagent can of course do far more complex tasks than this. <a href="https://github.com/benjamin-recht/microagent">You can try it out if you want</a>. 
It’s not going to be as powerful or as fun as Claude Code, but it shows how far you can get with a hardcoded loop with access to a minimal set of tools. There is no machine learning or fancy “AI” in the agent. It’s boring, standard software. Even though people like to speak breathlessly about the power of agents and their apparent <em>agency</em>, agents are simple, rule-based systems that use a known, fixed JSON schema to communicate with LLMs.</p> <p>LLMs, by contrast, remain deeply weird and mysterious. I don’t care how many pictures someone draws to explain transformers. It’s been a decade, and not a single explainer of LLM architecture has helped me understand what they actually do or why. </p> <p>And yet, I don’t have to understand human language, machine language, or the training set of language models to use them in an agent loop. The agent has no idea how the LLM works, and neither do I. It is totally fine for me to think of the LLM as a stochastic parrot. I can also think of it as an omniscient god. The agent merely needs enough logic so that, when the LLM gives an unacceptable answer, the agent software can recover. 
</p> <figure><img alt="" src="https://substackcdn.com/image/fetch/$s_!iewq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5636755f-80e4-4013-86fe-1bf5c91ce3dd_1590x544.png" /></figure> <p>The microagent feedback loop parallels the <a href="https://www.argmin.net/p/links-and-loops">simple amplifier</a> story I told a couple of weeks ago. There we had an unpredictable but powerful amplifier, made precise by an accurate but small attenuator. In coding agents, feedback with the agent makes LLMs more predictable and verifiable. It also provides a path to translate the text output of LLMs into real, impactful actions. Agents are boring, precise, and logical. LLMs are mysterious, nondeterministic, and contain an unfathomable subset of human knowledge. In feedback, they are the most useful and engaging generative AI product yet.</p> <div class="footnote"> <a class="footnote-number" href="#footnote-anchor-1" id="footnote-1" target="_self">1</a><div class="footnote-content"> <p>Goddamn it, I’m joking.</p> </div> </div> <p class="authors">By Ben Recht</p>
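Stripped of transport details, the loop described in the post above fits in a short, runnable Python sketch. To be clear, this is a hand-made illustration, not Recht's actual microagent: `scripted_llm` is a hypothetical stand-in that replays the post's hello-world exchange in place of the real Anthropic API (a real agent would POST the same JSON over HTTPS), while the tool names and message fields (`tool_calls`, `is_final`, `tool_use_id`) follow the JSON shown in the post.

```python
import subprocess

# The three tools from the post: read_file, write_file, shell.
def read_file(path):
    with open(path) as f:
        return f.read()

def write_file(path, content):
    with open(path, "w") as f:
        f.write(content)
    return f"Wrote {len(content.encode())} bytes to {path}"

def shell(command):
    r = subprocess.run(command, shell=True, capture_output=True, text=True)
    return r.stdout + r.stderr

TOOLS = {"read_file": read_file, "write_file": write_file, "shell": shell}

def run_agent(prompt, call_llm, max_turns=10):
    # The interface is stateless: the agent resends the whole history each round.
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = call_llm({"messages": messages, "tools": list(TOOLS)})
        if reply.get("is_final"):
            return reply.get("content", "")
        messages.append({"role": "assistant",
                         "content": reply.get("content", ""),
                         "tool_calls": reply.get("tool_calls", [])})
        for call in reply.get("tool_calls", []):
            tool = TOOLS.get(call["name"])
            outcome = tool(**call["parameters"]) if tool else f"Unknown tool: {call['name']}"
            messages.append({"role": "tool", "content": str(outcome),
                             "tool_use_id": call.get("id", "")})
    return "Gave up after max_turns rounds."

# Hypothetical stand-in for the LLM: replays the post's two scripted replies.
def scripted_llm(request):
    if not any(m["role"] == "tool" for m in request["messages"]):
        return {"content": "I'll create a file called test.txt with 'hello world'.",
                "tool_calls": [{"name": "write_file", "id": "toolu_01",
                                "parameters": {"path": "test.txt",
                                               "content": "hello world"}}]}
    return {"content": "The file test.txt has been successfully created.",
            "is_final": True}

print(run_agent("Create a file called test.txt with 'hello world'", scripted_llm))
```

A production agent adds retry logic for malformed replies and smarter context management, but the control flow is exactly this hardcoded loop.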
06.02.2026 14:59
News for January 2026 <p>After some insanely busy months, we have a merely busy month. A couple of neat results on sublinear graph algorithms (a subject dear to me!), DNF testing, and two expositions that should be of interest to our readers.</p> <p><strong>When Local and Non-Local Meet: Quadratic Improvement for Edge Estimation with Independent Set Queries</strong> by Tomer Adar, Yahel Hotam, Amit Levi (<a href="https://arxiv.org/abs/2601.21457">arXiv</a>). We’ve seen a lot of recent activity on the fundamental problem of edge estimation in graphs. Given a simple, connected, undirected graph \(G = (V,E)\) with \(n\) vertices, we want to get a \((1+\varepsilon)\)-approximation to the number of edges \(m\). In the standard adjacency list access model, a classic result of <a href="https://d-nb.info/991214714/34">Goldreich-Ron</a> proves that the sample complexity of this problem is \(\Theta(n/\sqrt{m})\) (ignoring \(poly(\varepsilon^{-1} \log n)\) factors). A completely different access model is the <a href="https://arxiv.org/abs/1711.07567">Independent Set (IS) model</a>. For any set \(S\), an oracle outputs a single bit indicating whether an edge is present in \(S\). Again, in this model, the optimal bound is \(\Theta(n/\sqrt{m})\). This paper shows that with access to both oracles (standard adjacency list and IS queries), one gets a quadratic improvement and the complexity is \(\Theta(\sqrt{n/\sqrt{m}})\). </p> <p><strong>Spectral Clustering in Birthday Paradox Time</strong> by Michael Kapralov, Ekaterina Kochetkova, Weronika Wrzos-Kaminska (<a href="https://arxiv.org/abs/2601.05883">arXiv</a>). Consider a (bounded degree) graph \(G\) that can be partitioned into \(k\) roughly equal “expanding clusters”. This means that one can remove an \(\varepsilon\)-fraction of the edges to get \(k\) connected components of size around \(n/k\), each of which has expansion at least \(\phi\). 
The aim is to get a sublinear oracle that determines the cluster that a vertex belongs to. The origins of this problem go back to problems in expansion testing and local expansion reconstruction. From a spectral perspective, the ideal “cluster classifier” is to simply take the bottom \(k\) eigenvectors of the Laplacian, and project every vertex onto this vector space. The corresponding embeddings of vertices in the same cluster will be close. This paper effectively implements this procedure in sublinear time, specifically \(\widetilde{O}(\sqrt{n})\) time, using a birthday paradox type collision argument to estimate the embedding similarities.</p> <p><strong>DNF formulas are efficiently testable with relative error</strong> by Xi Chen, William Pires, Toniann Pitassi, Rocco A. Servedio (<a href="https://arxiv.org/abs/2601.16076">arXiv</a>). The relative error model for Boolean functions was <a href="https://arxiv.org/abs/2410.09235">recently proposed</a> as an analogy to sparse graph testing. The standard notion of distance between two functions \(f, g: \{0,1\}^n \to \{0,1\}\) is just \(\|f - g\|_0/2^n\). But this is not meaningful when \(|f^{-1}(1)| \ll 2^n\). A natural notion of distance is to consider the sets \(f^{-1}(1)\) and \(g^{-1}(1)\) and measure their symmetric difference. One can now define distances to properties analogously. There have been numerous property testing results with this notion of distance, called the “relative error” setting. This paper considers the property of being a (small) DNF. Learning an \(s\)-term DNF requires \(n^{O(\log(s/\varepsilon))}\) queries. This paper shows that the (relative error) testing of \(s\)-term DNFs can be done in \(poly(s/\varepsilon)\) queries.</p> <p><strong>A short note on (distribution) testing lower bounds via polynomials</strong> by Clément Canonne (<a href="https://eccc.weizmann.ac.il/report/2026/009/">ECCC</a>). As the title says, this is a short note on a fundamental question of proving lower bounds for distribution testing. For your favorite distribution testing problem, to prove a lower bound, you start with a “Yes” distribution \(\mathcal{Y}\) and a “No” distribution \(\mathcal{N}\). So \(\mathcal{Y}\) would satisfy the desired property (say, uniformity) and \(\mathcal{N}\) would not. Then, you need to argue some sort of “statistical indistinguishability” of samples: the distribution of a set of \(s\) samples from \(\mathcal{Y}\) is almost the same as that from \(\mathcal{N}\). How does one prove the latter? A convenient lemma shows that if the first \(\ell\) moments of the distributions are the same, then we get an \(\Omega(k^{1-1/\ell})\) sample lower bound (\(k\) is the support size). The key insight is that such “matching moment” distributions/random variables can be generated by looking at specific univariate polynomials. It’s a really clever trick that looks at the powers of roots scaled by the derivative at those points. A really nice read overall!</p> <p><strong>Mathematical and computational perspectives on the Boolean and binary rank and their relation to the real rank</strong> by Michal Parnas (<a href="https://arxiv.org/abs/2601.13900">arXiv</a>). This is a long survey on the notion of Boolean and binary rank, and an overview of their use in TCS as well as methods to bound this rank. Section 7.3 gives an excellent discussion of property testing problems on Boolean rank. It also gives some of the main open questions regarding property testing of low Boolean rank.</p> <p class="authors">By Seshadhri</p>
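As a concrete (and decidedly non-sublinear) illustration of the spectral "cluster classifier" described in the Kapralov-Kochetkova-Wrzos-Kaminska item above: the exact embedding step is a few lines of numpy. This is a sketch of the object being approximated, assuming numpy is available; the paper's contribution is estimating the same embedding similarities in \(\widetilde{O}(\sqrt{n})\) time without ever forming the Laplacian.

```python
import numpy as np

def spectral_embedding(adj, k):
    """Embed each vertex via the bottom-k eigenvectors of the normalized Laplacian.

    adj: symmetric 0/1 adjacency matrix. For a graph with k well-separated
    expanding clusters, vertices in the same cluster get nearby embeddings.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1)), 0.0)
    # Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, :k]                   # one embedding row per vertex

# Toy 2-cluster graph: two disjoint triangles.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1
emb = spectral_embedding(A, 2)
# Vertices 0-2 embed at one point, vertices 3-5 at another, far-apart point.
```

In the sublinear setting one cannot afford this eigendecomposition; the paper instead estimates inner products of these embeddings from short random walks via a birthday-paradox collision count.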
06.02.2026 05:49
Tight FPT Approximations for Fair $k$-center with Outliers <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Ameet+Gadekar">Ameet Gadekar</a></p>The $k$-center problem is a fundamental clustering objective that has been extensively studied in approximation algorithms. Recent work has sought to incorporate modern constraints such as fairness and robustness, motivated by biased and noisy data. In this paper, we study fair $k$-center with outliers, where centers must respect group-based representation constraints while up to $z$ points may be discarded. While a bi-criteria FPT approximation was previously known, no true approximation algorithm was available for this problem. We present the first deterministic $3$-approximation algorithm running in fixed-parameter tractable time parameterized by $k$. Our approach departs from projection-based methods and instead directly constructs a fair solution using a novel iterative ball-finding framework, based on a structural trichotomy that enables fixed-parameter approximation for the problem. We further extend our algorithm to fair $k$-supplier with outliers and to the more general fair-range setting with both lower and upper bounds. Finally, we show that improving the approximation factor below $3$ is $\mathrm{W[2]}$-hard, establishing the optimality of our results.
06.02.2026 01:00 👍 0 🔁 0 💬 0 📌 0
Preview
Minimizing Makespan in Sublinear Time via Weighted Random Sampling <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Bin+Fu">Bin Fu</a>, <a href="https://dblp.uni-trier.de/search?q=Yumei+Huo">Yumei Huo</a>, <a href="https://dblp.uni-trier.de/search?q=Hairong+Zhao">Hairong Zhao</a></p>We consider the classical makespan minimization scheduling problem where $n$ jobs must be scheduled on $m$ identical machines. Using weighted random sampling, we develop two sublinear time approximation schemes: one for the case where $n$ is known and the other for the case where $n$ is unknown. Both algorithms not only give a $(1+3ε)$-approximation to the optimal makespan but also generate a sketch schedule. Our first algorithm, which targets the case where $n$ is known and draws samples in a single round under weighted random sampling, has a running time of $\tilde{O}(\tfrac{m^5}{ε^4} \sqrt{n}+A(\lceil m/ε \rceil, ε))$, where $A(\mathcal{N}, α)$ is the time complexity of any $(1+α)$-approximation scheme for the makespan minimization of $\mathcal{N}$ jobs. The second algorithm addresses the case where $n$ is unknown. It uses adaptive weighted random sampling; that is, it draws samples in several rounds, adjusting the number of samples after each round, and runs in sublinear time $\tilde{O}\left( \tfrac{m^5}{ε^4} \sqrt{n} + A(\lceil m/ε \rceil, ε)\right)$. We also provide an implementation that generates a weighted random sample using $O(\log n)$ uniform random samples.
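As a point of reference for the makespan objective: Graham's classical list scheduling reads every job once and is a $(2 - 1/m)$-approximation. This is the baseline the sublinear-time schemes improve on by looking only at a weighted sample of the jobs; the sketch below is that classical baseline, not the paper's algorithm.

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling: place each job on the currently least
    loaded of m identical machines; a (2 - 1/m)-approximation."""
    loads = [0.0] * m                 # min-heap of machine loads
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)                 # makespan of the greedy schedule

# Greedy gets makespan 7 here, while the optimum is 6 ({3,3} and {2,2,2}).
makespan = list_schedule([3.0, 3.0, 2.0, 2.0, 2.0], m=2)
```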
05.02.2026 01:00 👍 0 🔁 0 💬 0 📌 0
Preview
Sampling the Oxford CS Library <p></p> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnxWcBppn1emnORg6NymBggXDWGSJbbqknMhVqWeE-tUjd_a0cCnHbicSLXQnA4qS9D0Jr_IJ_IwCl409KW_AzSz7OMyTQHPIHKPiLuV4Ml7flC8NjsW06WJKcsQ-uhF4WlPCx_mnQ_oW3htJafWglPx0Q-TD3u4aU3SqL98BCpSrAciw_ELDL/s4000/PXL_20260203_161652139.MP.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnxWcBppn1emnORg6NymBggXDWGSJbbqknMhVqWeE-tUjd_a0cCnHbicSLXQnA4qS9D0Jr_IJ_IwCl409KW_AzSz7OMyTQHPIHKPiLuV4Ml7flC8NjsW06WJKcsQ-uhF4WlPCx_mnQ_oW3htJafWglPx0Q-TD3u4aU3SqL98BCpSrAciw_ELDL/w150-h200/PXL_20260203_161652139.MP.jpg" width="150" /></a></div>Wandering around the maze known as the Computer Science building at Oxford, I found the computer science library. Rarely these days do you see a library (and a librarian) devoted to computer science. The librarian found their copy of <a href="https://goldenticket.fortnow.com/">The Golden Ticket</a> and asked me to inscribe and sign it, <a href="https://blog.computationalcomplexity.org/2014/03/spring-breaking-at-dagstuhl.html">just like at Dagstuhl</a>, perhaps the only other active CS library I know of.<br /><p></p> <p>It brought back memories of the early 90s when I would often head to the Math/CS library at the University of Chicago to track down some conference or journal paper. 
Now we just click and download but you miss finding something else interesting in the proceedings or the stacks in general.</p> <p></p> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Vdizxgx84E8ZSrTvJSTCCVoQezo_iPazbden8NUzhpjboJlFk46mtKadqJJJyFI1DLm1SG5TxHgh_orePztb_0rnrHOt_kkPiAbB7SULKl0kSuiN532nmDtu1s3JsckXUvP4ZHG_A-3WwaLM9yeNdx42kKmKGdbNv-0jzPvOgZ5YEqgpp3j_/s4000/PXL_20260203_161805204.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Vdizxgx84E8ZSrTvJSTCCVoQezo_iPazbden8NUzhpjboJlFk46mtKadqJJJyFI1DLm1SG5TxHgh_orePztb_0rnrHOt_kkPiAbB7SULKl0kSuiN532nmDtu1s3JsckXUvP4ZHG_A-3WwaLM9yeNdx42kKmKGdbNv-0jzPvOgZ5YEqgpp3j_/s320/PXL_20260203_161805204.jpg" width="320" /></a></div>I had time to kill so I wandered around the library finding memories in the stacks including the 1987 STOC Proceedings, home to my first conference paper, <a href="https://doi.org/10.1145/28395.28418">The complexity of perfect zero-knowledge</a>. The paper might be best known for my upper bound protocol which is republished here in its entirety. 
<p></p> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJAUGE578dxJrkWGl28feVbNvtdpkN9nABrQl5i8BT864rCwHTp5OFrD_QUAiO5VHqhV9r-HfH50fXQ9IlZ3yuAGduyOf8qpWDt8hzRX0eXLfM_fPGnVNTO3optYkGh4TJjaByijdN22rQVN3Inv3qwMhZrzIk3atimTOqOKrXUmxs-xHl3EWD/s452/upper%20bound.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="236" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJAUGE578dxJrkWGl28feVbNvtdpkN9nABrQl5i8BT864rCwHTp5OFrD_QUAiO5VHqhV9r-HfH50fXQ9IlZ3yuAGduyOf8qpWDt8hzRX0eXLfM_fPGnVNTO3optYkGh4TJjaByijdN22rQVN3Inv3qwMhZrzIk3atimTOqOKrXUmxs-xHl3EWD/w400-h236/upper%20bound.png" width="400" /></a></div> <p>That's how I wrote it nearly four decades ago, without proof just an intuition why it works. Those were the days. I did work out the full covariance argument in the <a href="https://lance.fortnow.com/papers/files/pzk.pdf">journal version</a> though I missed other <a href="https://blog.computationalcomplexity.org/2005/01/embarrassing-mistakes.html">bugs in the proof</a>. </p> <p>The upper bound requires the verifier to have a random sample of the distribution unknown to the prover. Rahul Santhanam, who is hosting my visit to Oxford, asked if the converse was known. Goldreich, Vadhan and Wigderson, in the appendix of their <a href="https://doi.org/10.1007/s00037-002-0169-0">Laconic Prover paper</a>, show a sampling protocol based on the upper bound on the size of a set, though the sample is not completely unknown to the prover. Neat to revisit questions from my first conference paper. 
</p> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-tx5x886an6G4vfn0JRSbl0Wylm13nhFg2RwCmtKWUkMy5IBy7xIudqzlcJCgoa34jopKEveSnOow7P_b5nS2gKCNfLiGbdHxnUF8DYQuc-vzJW3k_wwqNrRZZ9ovFuitH1seOkXOPlWG1X55gF9x8vUCRutB6BEnNQWrmRXHYCWhz3muZzNv/s2579/PXL_20260203_161626251.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" height="122" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-tx5x886an6G4vfn0JRSbl0Wylm13nhFg2RwCmtKWUkMy5IBy7xIudqzlcJCgoa34jopKEveSnOow7P_b5nS2gKCNfLiGbdHxnUF8DYQuc-vzJW3k_wwqNrRZZ9ovFuitH1seOkXOPlWG1X55gF9x8vUCRutB6BEnNQWrmRXHYCWhz3muZzNv/s320/PXL_20260203_161626251.jpg" width="320" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Oxford CS Librarian Aza Ballard-Whyte</td></tr> </tbody></table> <p></p> <p class="authors">By Lance Fortnow</p>
04.02.2026 10:49 👍 0 🔁 0 💬 0 📌 0
Preview
Fully Funded PhD Position in Algorithms & Complexity at University of Birmingham (apply by February 28, 2026) <p>Fully funded PhD (3.5 years) in Algorithms &amp; Complexity at the University of Birmingham, School of Computer Science. Tuition fees covered, stipend and travel support included. Applicants should have a strong background in theoretical computer science (algorithms, complexity). Strong masters or outstanding bachelors candidates welcome. Start date: September 2026.</p> <p>Website: <a href="https://www.birmingham.ac.uk/study/postgraduate/subjects/computer-science-and-data-science-courses/computer-science-phd">https://www.birmingham.ac.uk/study/postgraduate/subjects/computer-science-and-data-science-courses/computer-science-phd</a><br /> Email: s.mukhopadhyay@bham.ac.uk</p> <p class="authors">By shacharlovett</p>
04.02.2026 10:36 👍 0 🔁 0 💬 0 📌 0
Preview
Vigemers: on the number of $k$-mers sharing the same XOR-based minimizer <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Florian+Ingels">Florian Ingels</a>, <a href="https://dblp.uni-trier.de/search?q=Antoine+Limasset">Antoine Limasset</a>, <a href="https://dblp.uni-trier.de/search?q=Camille+Marchet">Camille Marchet</a>, <a href="https://dblp.uni-trier.de/search?q=Mika%C3%ABl+Salson">Mikaël Salson</a></p>In bioinformatics, minimizers have become an inescapable method for handling $k$-mers (words of fixed size $k$) extracted from DNA or RNA sequencing, whether for sampling, storage, querying or partitioning. According to some fixed order on $m$-mers ($m
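To make the object of study concrete: a minimizer scheme picks, for each $k$-mer, its smallest $m$-mer under some fixed order on $m$-mers. The sketch below is my own illustration using plain lexicographic order; XOR-based schemes like those the paper studies would instead compare $m$-mers by an integer encoding XORed with a random seed. Note how consecutive $k$-mers tend to share their minimizer, which is what makes minimizers useful for partitioning.

```python
def minimizer(kmer, m):
    """Smallest length-m substring of a k-mer under lexicographic order.
    (XOR-based schemes swap in a different comparison key.)"""
    return min(kmer[i:i + m] for i in range(len(kmer) - m + 1))

seq, k, m = "ACGTACGGT", 5, 3
kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
mins = [minimizer(km, m) for km in kmers]   # many adjacent k-mers agree
```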
04.02.2026 01:00 👍 0 🔁 0 💬 0 📌 0
Preview
Updates and Plans V: From Boise to Tel Aviv, Ceasefire, My 70th Birthday, Nostalgia, Problems, Outrageous Conjectures, Quantum, and AI <p>This is the fifth post of this type (<a href="https://gilkalai.wordpress.com/2008/08/13/plans-and-updates/">I (2008)</a>; <a href="https://gilkalai.wordpress.com/2011/06/14/tentative-plans-and-belated-updates-ii/">II(2011)</a>; <a href="https://gilkalai.wordpress.com/2015/08/14/updates-and-plans-iii/">III(2015)</a>; <a href="https://gilkalai.wordpress.com/2024/03/20/updates-and-plans-iv/">IV(2024</a><a href="https://gilkalai.wordpress.com/2024/03/20/updates-and-plans-iv/">)</a>).</p> <h2>Between Boise and Tel Aviv</h2> <p>During the summer we spent two months in the lovely city of Boise, Idaho. We stayed with my son Hagai and his husband Felix, their one-and-a-half-year-old son Yonatan, and—one week after our arrival—their second son, Rafael, was born. I was visiting Boise State University and was hosted by Zach Teitler, whom I first met many years ago at Texas A&amp;M.</p> <p>Boise is a beautiful city with wonderful parks, and Mazi and I also devoted a week to visiting Yellowstone for the first time.</p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/fam25.jpg"><img alt="" class="alignnone wp-image-31109" height="490" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/fam25.jpg?w=560" width="268" /></a></p> <p><span style="color: #ff0000;">On the flight back with Hagai, Rafael, Mazi, and Yonatan</span></p> <h2>Ceasefire in Gaza</h2> <p>On September 29, 2025, US president Donald Trump put forward a 21-point plan for a ceasefire in Gaza, as part of a broader initiative toward peace in the Middle East. The plan was endorsed by many world leaders, including Arab and Muslim leaders, as well as by the Israeli government headed by Benjamin Netanyahu. 
On October 9, an agreement was reached between Israel and Hamas on a ceasefire, a partial withdrawal of Israeli forces, and the release of kidnapped Israelis. On October 13, all living kidnapped Israelis were released. By January 27, 2026, all bodies of Israelis were returned.</p> <p>The end of this terrible war is certainly a source of relief and hope. Still, we are living in dangerous and tragic times, with much uncertainty.</p> <h2>My 70th Birthday</h2> <p>We landed back in Tel Aviv on October 1, the eve of Yom Kippur. The following day—somewhat to my surprise—was my 70th birthday. Two weeks later, on Simchat Torah (October 14), we gathered to celebrate the holiday, the birthday, the new family member, and the end of the war. Being 70 years old feels sort of strange. </p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/img-20251018-wa00012.jpg"><img alt="" class="alignnone wp-image-31110" height="234" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/img-20251018-wa00012.jpg?w=640" width="508" /></a></p> <h2>Nostalgia Corner and Congratulations to Benjy Weiss</h2> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/wi-mo.png"><img alt="" class="alignnone wp-image-31177" height="286" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/wi-mo.png" width="177" /></a></p> <p><span style="color: #ff0000;">“secretly jubilant”</span> </p> <p>I recently found, in my sister’s home (where we both lived as children), a <em>Jerusalem Post</em> article from 1970 about the Israeli Mathematical Olympiad. In that competition I received a consolation prize, while my friend <a href="https://en.wikipedia.org/wiki/Ron_Donagi">Ron Donagi</a> received the first prize. 
Here is a quote from the article: “ ‘The prize is much more than I expected,’ stated an apparently indifferent yet secretly jubilant Gil.”</p> <p>The reporter, Mark Daniel Sacks, also expressed his wish for similar encouragement for “those of us who are interested in literature, poetry, philosophy, and art.” I fully agree!</p> <p>A few months earlier, in the fall of 1969, I began attending Benjy Weiss’s year-long mathematics course for high-school students, together with 20–25 other students, including <a href="https://cee.technion.ac.il/en/members/agnon/">Yehuda Agnon</a>, <a href="http://www.cs.huji.ac.il/~benor/">Michael Ben-Or</a>, <a href="https://en-exact-sciences.tau.ac.il/profile/lehrer" rel="noopener" target="_blank" title="Ehud Lehrer">Ehud Lehrer</a>, <a href="https://www.bc.edu/bc-web/schools/morrissey/departments/economics/people/faculty-directory/uzi-segal.html">Uzi Segal</a>, <a href="https://en.wikipedia.org/wiki/Yonatan_Stern">Yonatan Stern</a>, and <a href="http://www.cs.huji.ac.il/~werman/" rel="noopener" target="_blank" title="Mike Werman">Mike Werman</a>. The course was an eye-opener for all of us.</p> <p>It has just been announced that our teacher, Benjy Weiss, has won the 2026 Israel Prize in Mathematics. Heartfelt congratulations, Benjy!</p> <h2>Problems, Problems </h2> <p>Over the years I have devoted quite a few posts—here, on other blogs, and on MathOverflow—to open problems. In 2013, at the <a href="http://www.renyi.hu/conferences/erdos100/">Erdős Centennial conference</a>, I gave a lecture on old and new problems, mainly in combinatorics and geometry (<a href="https://gilkalai.wordpress.com/wp-content/uploads/2013/07/erdos100-20.pdf">here are the slides</a>), where I presented twenty problems that are also listed in <a href="https://gilkalai.wordpress.com/2013/07/06/some-old-and-new-problems-in-combinatorics-and-geometry/">this post</a>. 
Since then, there has been substantial progress, and in some cases full solutions, for roughly 30% of them. </p> <p>I gradually plan, somewhat in Erdős’ tradition, to upgrade my problem posts and lectures into papers.</p> <p>So far, in 2015 I wrote <a href="https://arxiv.org/abs/1505.04952">a paper</a> around Borsuk’s problem. (Some of the problems appeared in <a href="https://gilkalai.wordpress.com/tag/borsuks-conjecture/">these posts</a>.) In 2022, Imre Barany and I wrote a survey article on <a href="https://www.ams.org/bull/2022-59-04/S0273-0979-2021-01753-8/S0273-0979-2021-01753-8.pdf">Helly-type problems</a>, which was nicely received. I am currently writing a paper about the diameter problem for graphs of polytopes. We devoted many posts—and <a href="https://gilkalai.wordpress.com/category/polymath3/">Polymath 3</a>—to this problem, and I plan to submit the paper to the new and splendid journal JOMP: <a href="https://jomprob.org/index.php/jomp">the Journal of Open Math Problems</a>.</p> <h3>Geometric Combinatorics</h3> <p>There are many open problems that I like—and quite a few that I myself posed—concerning the combinatorial theory of convex polytopes, face numbers of polytopes, and related cellular objects. <a href="https://gilkalai.wordpress.com/2008/05/07/five-open-problems-regarding-convex-polytopes/">This post</a> from 2008 lists five elementary problems, and since then one of them has been solved. One outstanding problem in the field that I like is whether triangulated spheres are determined from their dual graphs. 
This is known for simplicial polytopes (see <a href="https://gilkalai.wordpress.com/2009/01/16/telling-a-simple-polytope-from-its-graph/">this post</a> from 2009) and was recently proved for all shellable simplicial spheres by Yirong Yang in her paper <a href="https://arxiv.org/abs/2401.04220"><span class="search-hit mathjax">Reconstructing</span> a Shellable Sphere from its Facet-Ridge Graph</a>. </p> <p>Let me mention two problems from other areas of combinatorial geometry.</p> <p>Two triangles are called <em>almost disjoint</em> if they are either disjoint or their intersection consists of one common vertex. Let <em>f(n)</em> denote the maximum number of pairwise almost disjoint triangles that can be found on some vertex set of <em>n</em> points in 3-space. How large can <em>f(n)</em> be? It is easy to see that <em>f(n)</em> is at most quadratic in <img alt="n" class="latex" src="https://s0.wp.com/latex.php?latex=n&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" /> and the <a href="https://link.springer.com/content/pdf/10.1007/s00454-002-2888-z.pdf">best lower bound</a> from 2002 by Károlyi and Solymosi is <img alt="f(n)=\Omega (n^{3/2})" class="latex" src="https://s0.wp.com/latex.php?latex=f%28n%29%3D%5COmega+%28n%5E%7B3%2F2%7D%29&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />. There is a <a href="https://arxiv.org/abs/1705.01272">related work</a> from 2017 by Solymosi and Wong. </p> <p>In 1995, Nati Linial and I conjectured that the kissing number for lattice sphere packings in <img alt="\mathbb R^n" class="latex" src="https://s0.wp.com/latex.php?latex=%5Cmathbb+R%5En&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" /> is subexponential in <img alt="n" class="latex" src="https://s0.wp.com/latex.php?latex=n&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />. 
The highest known kissing number behaves like <img alt="n^{\log n}" class="latex" src="https://s0.wp.com/latex.php?latex=n%5E%7B%5Clog+n%7D&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />. Our problem was related to the question of finding <em>upper bounds</em> for the density of sphere packings in high dimension. Recent <a href="https://gilkalai.wordpress.com/2025/04/09/boaz-klartag-striking-new-lower-bounds-for-sphere-packing-in-high-dimensions/">celebrated work</a> of Klartag shows an intriguing connection between kissing numbers and <em>lower bounds</em> on sphere-packing density.</p> <h3>Analysis of Boolean Functions and Probabilistic Combinatorics</h3> <p>In a <a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/fourier.pdf">draft paper from 2000</a> (which I mostly distributed privately), I listed 18 interesting phenomena and 23 problems around these phenomena related to Boolean functions and their Fourier expansion. Since then there were many developments in the analysis of Boolean functions. Here is a <a href="https://simons.berkeley.edu/sites/default/files/openprobsmerged.pdf">comprehensive list</a> of open problems from 2014. One problem in the list was <a href="https://arxiv.org/abs/2510.20013v1">recently solved</a> by GPT5. 
I myself posed quite a few problems in this area but let me mention today the still open Aaronson-Ambainis conjecture from 2008: for every function <img alt="f:\{-1,1\}^n\to [-1,1]" class="latex" src="https://s0.wp.com/latex.php?latex=f%3A%5C%7B-1%2C1%5C%7D%5En%5Cto+%5B-1%2C1%5D&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" /> of degree at most <img alt="k" class="latex" src="https://s0.wp.com/latex.php?latex=k&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />, there exists a variable <img alt="x_i" class="latex" src="https://s0.wp.com/latex.php?latex=x_i&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" /> with influence at least <img alt="(V(f)/k)^C" class="latex" src="https://s0.wp.com/latex.php?latex=%28V%28f%29%2Fk%29%5EC&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />, for some constant <img alt="C" class="latex" src="https://s0.wp.com/latex.php?latex=C&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />. <img alt="V(f)" class="latex" src="https://s0.wp.com/latex.php?latex=V%28f%29&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" /> stands for the variance of <img alt="f" class="latex" src="https://s0.wp.com/latex.php?latex=f&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />. </p> <p>In probabilistic combinatorics, the “Kahn-Kalai conjecture” from our 2006 paper has been <a href="https://arxiv.org/abs/2203.17207">famously solved</a> by Park and Pham and a second conjecture about graphs <a href="https://arxiv.org/abs/2508.14269">was settled</a> – up to a <img alt="\log^2 n" class="latex" src="https://s0.wp.com/latex.php?latex=%5Clog%5E2+n&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" /> factor – by Dubroff, Kahn, and Park. </p> <p>Jeff Kahn and I regarded the conjecture as outrageous—and likely false—but in that paper we formulated several specific conjectures (in the area of discrete isoperimetric inequalities) as part of a broader program for proving it. 
In spite of <a href="https://www.ams.org/journals/jams/2024-37-01/S0894-0347-2023-01027-6/">some substantial progress</a>, these conjectures remain largely open, although a few have been refuted. One of those conjectures is presented in <a href="https://mathoverflow.net/questions/10799/optimal-monotone-families-for-the-discrete-isoperimetric-inequality">this MO post</a>. In principle, the Kruskal-Katona theorem should suffice to settle this problem, but we still cannot solve it. </p> <h3>Extremal Combinatorics</h3> <p>One question I <a href="https://mathoverflow.net/questions/114646/intersecting-family-of-triangulations">asked</a>—independently also posed by Karen Meagher—concerned the independence numbers of intersection graphs of triangulations. This conjecture is still open and it admits a lovely <a href="https://arxiv.org/abs/1710.02518">generalization for a large class of polytopes</a>. Recently, Anton Molnar, Cosmin Pohoata, Michael Zheng, and Daniel G. Zhu raised the question of finding the chromatic number of the intersection graphs of triangulations—<a href="https://arxiv.org/abs/2510.27689">and solved it</a>! They showed that the Kneser graph of triangulations of a convex <em>n</em>-gon has chromatic number <img alt="n-2" class="latex" src="https://s0.wp.com/latex.php?latex=n-2&amp;bg=ffffff&amp;fg=333333&amp;s=0&amp;c=20201002" />.</p> <h3>Computational Complexity and Number Theory</h3> <p>Around 2010, I formulated several conjectures relating computational complexity and number theory, which led to some very nice developments. 
Together with Mrinal Kumar and Ben Lee Volk, I plan to write a paper with further problems connecting algebraic circuit complexity and number theory.</p> <h2>Two Outrageous Conjectures</h2> <p>Here are two very outrageous conjectures that may well admit simple refutations. (Comments are welcome; the right thing to do would be to devote a separate post to each of them, stay tuned.) </p> <p>The first outrageous conjecture is presented in this slide from a 2024 lecture. </p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/zfc-nt.png"><img alt="" class="alignnone wp-image-31154" height="312" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/zfc-nt.png?w=640" width="541" /></a></p> <p>See also this <a href="https://mathoverflow.net/questions/135948/infinitely-many-primes-and-mobius-randomness-in-sparse-sets">MO question</a> and <a href="https://mathoverflow.net/questions/27755/knuths-intuition-that-goldbach-might-be-unprovable">this one</a>.</p> <p>The second vague and outrageous conjecture (already mentioned earlier in <a href="https://gilkalai.wordpress.com/2013/08/08/poznan-random-structures-and-algorithms-2013/">this post</a>) is about computational complexity and more precisely about Papadimitriou’s computational hierarchy for mathematical proofs. It asserts that theorems that guarantee the existence (for sure, not just with high probability) of combinatorial structures, and whose proofs are based on the probabilistic method, are accompanied by an efficient algorithm (possibly randomized) for finding these structures. (In other words, the probabilistic method does not lead to a new Papadimitriou class beyond <strong>P</strong>.)</p> <h2>Quantum Information and Quantum Physics</h2> <p>It is likely that the proportion of posts dealing with quantum computing and quantum physics will increase. So far, they account for about 8% of all posts since I began blogging. 
My interest in this area has branched into several related directions.</p> <h3>The Argument Against Quantum Computing</h3> <p>The direction closest to my heart is <a href="https://gilkalai.wordpress.com/2020/12/29/the-argument-against-quantum-computers-a-very-short-introduction/">the argument against quantum computing</a>. I have invested considerable effort in explaining and discussing my theory for why quantum computers are inherently impossible—through papers, lectures, debates, and blog posts. I try not to oversell the case, and I think that ultimately, experiments are likely to provide the clearest way to decide the matter.</p> <h3>Correlated Errors </h3> <p>A related but distinct issue concerns the modeling of correlated errors, which was central in my research between 2005 and 2012, and more generally the behavior (and modeling) of noisy quantum systems that do not exhibit quantum fault tolerance. Here too, experiments and simulations can provide significant insight, and my (admittedly bold) conjectures about error correlations could be tested directly.</p> <h3>Statistical Analysis of Experimental Data</h3> <p>Another topic is the statistical analysis of current experimental data. My coauthors and I devoted substantial effort to analyzing Google’s 2019 experiment, and I believe more can be done to clarify and explain the findings of our papers. Our long-running project is closely related to developing statistical tools for analyzing quantum measurements and modeling noise. A recent paper on this topic by another group is: <a href="https://arxiv.org/abs/2510.09919">How much can we learn from quantum random circuit sampling?</a> by Manole et al.</p> <h3>Quantum Puzzles</h3> <p>I also plan a series of posts devoted to quantum puzzles related to quantum information and computation. The <a href="https://gilkalai.wordpress.com/2025/02/21/majorana-zero-modes-and-topological-qubits/">first post</a> concerned Majorana zero modes. 
Whether Majorana zero modes can in fact be created remains a major mystery in physics, and I personally suspect the answer may be negative. (As with “quantum supremacy,” their realization has been claimed by several research groups.) Planned follow-up posts will address quantum cryptography and the time–energy uncertainty principle.</p> <h3>Free Will</h3> <p>I plan to return to the fascinating connections between quantum physics, computation, and free will. I wrote a paper on this topic in 2021, and we discussed it in <a href="https://gilkalai.wordpress.com/2021/08/18/to-cheer-you-up-in-difficult-times-29-free-will-predictability-and-quantum-computers/">this blog post</a>. Since then, I participated in two conferences in Nazareth, in 2022 and 2024, devoted to free will (here are the <a href="https://www.youtube.com/channel/UCakLAZnGwNU58VHd4AkUJeA/videos">videotaped lectures</a> – in Hebrew). Following these conferences and my paper, I have had many stimulating discussions with colleagues from a wide variety of disciplines. </p> <h3>Is Quantum Computational Advantage Manifested by Nature? Has it been Achieved by Experiments?</h3> <p>This question lies at the heart of the matter and connects to all the topics above. In a recent lecture, <a href="https://phsites.technion.ac.il/avron/">Yosi Avron</a> mentioned an argument—possibly going back to Feynman—that quantum physics in Nature already exhibits “quantum supremacy”: computing the magnetic moments of the proton or neutron from first principles is extraordinarily difficult and yields estimates far from experimental values, yet protons and neutrons “compute” their magnetic moments effortlessly. 
In the same lecture, delivered at a celebratory meeting for the 100th anniversary of quantum mechanics at the Open University in Ra’anana, Yosi also argued that no country can afford to lag behind in quantum computation, drawing an analogy with nuclear capabilities.</p> <h2>Computers, AI and Mathematics</h2> <p>Like many others, I plan to experiment with modern AI tools in the hope of using them for meaningful mathematical research. I am cautiously optimistic—perhaps naïve. Let’s see how it goes.</p> <h2>Pictures</h2> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250818_102009.jpg"><img alt="" class="alignnone wp-image-31126" height="214" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250818_102009.jpg?w=640" width="286" />   </a><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250805_182317.jpg"><img alt="" class="alignnone wp-image-31135" height="210" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250805_182317.jpg?w=640" width="281" /></a></p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250830_172855.jpg"><img alt="" class="alignnone wp-image-31127" height="203" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250830_172855.jpg?w=640" width="271" />  </a><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250830_150254.jpg"><img alt="" class="alignnone wp-image-31128" height="205" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250830_150254.jpg?w=640" width="274" /></a></p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250901_102846.jpg"><img alt="" class="alignnone wp-image-31130" height="138" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250901_102846.jpg?w=640" width="299" />  </a><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250901_111019.jpg"><img alt="" class="alignnone wp-image-31129" height="137" 
src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250901_111019.jpg?w=640" width="297" /></a></p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250902_122042.jpg"><img alt="" class="alignnone wp-image-31131" height="130" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250902_122042.jpg?w=640" width="282" />  </a><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/screenshot_20250902_103045_gallery.jpg"><img alt="" class="alignnone wp-image-31132" height="132" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/screenshot_20250902_103045_gallery.jpg?w=640" width="286" /></a></p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/artpoint1.png"><img alt="" class="alignnone wp-image-31159" height="138" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/artpoint1.png?w=640" width="283" />  </a><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250901_091848.jpg"><img alt="" class="alignnone wp-image-31160" height="135" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250901_091848.jpg?w=640" width="293" /></a></p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250824_114037.jpg"><img alt="" class="alignnone wp-image-31134" height="177" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/20250824_114037.jpg?w=640" width="237" />   </a><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/rabbit.png"><img alt="" class="alignnone wp-image-31136" height="174" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/rabbit.png?w=640" width="296" /></a></p> <p><a href="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/img-20260127-wa0009.jpg"><img alt="" class="alignnone wp-image-31138" height="179" src="https://gilkalai.wordpress.com/wp-content/uploads/2026/01/img-20260127-wa0009.jpg?w=640" width="388" /></a></p> <p><span style="color: #ff0000;">Top row: Boise with 
Zach Teitler, Alexander Woo and Bruce Sagan’s classical book, and with local convex polytopes. Second row: sightseeing near Boise. Third, fourth, and fifth rows: Yellowstone. Sixth row: Yonatan in Boise. Seventh row: Mazi and I with Ilan and Yoav in Tel Aviv. </span></p> <p class="authors">By Gil Kalai</p>
03.02.2026 19:32 👍 0 🔁 0 💬 0 📌 0
Preview
All Downhill From Here <div class="captioned-image-container"><figure><a class="image-link image2" href="https://substackcdn.com/image/fetch/$s_!odJ4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe016a1d0-a56f-42cd-b0b9-e90157ceb7ac_1100x219.jpeg" target="_blank"><div class="image2-inset"> <source type="image/webp" /><img alt="" class="sizing-normal" height="219" src="https://substackcdn.com/image/fetch/$s_!odJ4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe016a1d0-a56f-42cd-b0b9-e90157ceb7ac_1100x219.jpeg" width="1100" /><div></div> </div></a></figure></div> <p><em>This is a live blog of Lecture 2 of my graduate seminar “Feedback, Learning, and Adaptation.” A table of contents is <a href="https://www.argmin.net/p/feedback-learning-and-adaptation-d6c">here</a>.</em></p> <p>The analysis in the <a href="https://www.argmin.net/p/matters-of-life-and-death">last post</a> hinted that we can use calculus to analyze the local behavior of a homeostatic system around its setpoint. I wrote up the details in <a href="https://people.eecs.berkeley.edu/~brecht/cs294_s26/homeostasis.pdf">these lecture notes</a>. As long as the first terms of the Taylor series provide a reasonable approximation of the equations that define the dynamical system, we can use linear algebra to reason about how a homeostatic system maintains its steady state.</p> <p>The problem with this analysis by linear proxy is that we need to somehow account for the error in the approximation. Such bookkeeping tends to be much more annoying. Determining the region of state space under which a Taylor series is accurate always amounts to frustrating calculations. These calculations also tend to be highly tuned to the particulars of the differential structure of the model. 
If the model slightly changes, you have to start all over again and rederive new error bounds.</p> <p>To get around this sort of linear proxy analysis, Lyapunov invented an alternative method, called his second method or his <em>direct</em> method (I got the direct and indirect methods confused yesterday). To avoid having to create a mnemonic for what direct and indirect mean, I’m going to switch to descriptive terms for Lyapunov’s two methods: the method of linearization and the method of <em>potential functions.</em></p> <p>The method of potential functions is inspired by physics. The goal is to define a notion of “energy” for any possible state, and then show that energy dissipates as the dynamics unravel into the future. Mathematically, the method seeks a function that maps states to nonnegative scalars. This function should be large far from the fixed point. It should equal zero at the fixed point and only at the fixed point. And the function should decrease along the trajectories of the dynamical system. In other words, the function must take on a lower value at time t+1 than it held at time t. Such a function is called a potential function (also often called a <em>Lyapunov function</em>).
We thus shift the burden of proof to finding and verifying that the certificate satisfies a list of desiderata. This turns proof into a constraint satisfaction problem, one that is amenable to computer search.</p> <p>Let me give a simple case in linear systems that demonstrates how this logical transformation works. We’ll do much more interesting nonlinear cases in the next class.</p> <p>Suppose we’d like to show all trajectories of a linear dynamical system</p> <div class="captioned-image-container"><figure><a class="image-link image2" href="https://substackcdn.com/image/fetch/$s_!dAQ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b7c65d2-8735-4885-bc44-6c57f33c582a_118x26.png" target="_blank"><div class="image2-inset"> <source type="image/webp" /><img alt="" class="sizing-normal" height="26" src="https://substackcdn.com/image/fetch/$s_!dAQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b7c65d2-8735-4885-bc44-6c57f33c582a_118x26.png" width="118" /><div></div> </div></a></figure></div> <p>converge to zero. From your first class on controls, you know that you can just compute the eigenvalues of A and make sure their magnitudes are all less than one. But let’s find a potential function that also certifies convergence. I need a family of functions that are positive everywhere except at the origin, where they are equal to zero. 
One simple family would be the strongly convex quadratics,</p> <div class="captioned-image-container"><figure><a class="image-link image2" href="https://substackcdn.com/image/fetch/$s_!U-43!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4f96291-8319-45c6-8434-e9f366eefcbc_146x28.png" target="_blank"><div class="image2-inset"> <source type="image/webp" /><img alt="" class="sizing-normal" height="28" src="https://substackcdn.com/image/fetch/$s_!U-43!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4f96291-8319-45c6-8434-e9f366eefcbc_146x28.png" width="146" /><div></div> </div></a></figure></div> <p>where P is a positive definite matrix with all eigenvalues greater than zero. If I want to show that the potential decreases along trajectories, I need</p> <div class="captioned-image-container"><figure><a class="image-link image2" href="https://substackcdn.com/image/fetch/$s_!vVQr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aaf44b3-d096-42a1-84c9-b22ebf1b41f1_246x24.png" target="_blank"><div class="image2-inset"> <source type="image/webp" /><img alt="" class="sizing-normal" height="24" src="https://substackcdn.com/image/fetch/$s_!vVQr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aaf44b3-d096-42a1-84c9-b22ebf1b41f1_246x24.png" width="246" /><div></div> </div></a></figure></div> <p>for all x. 
This is equivalent to the matrix inequality</p> <div class="captioned-image-container"><figure><a class="image-link image2" href="https://substackcdn.com/image/fetch/$s_!K0t4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffef88c2f-8ed8-4aa7-b201-8814235fe628_190x26.png" target="_blank"><div class="image2-inset"> <source type="image/webp" /><img alt="" class="sizing-normal" height="26" src="https://substackcdn.com/image/fetch/$s_!K0t4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffef88c2f-8ed8-4aa7-b201-8814235fe628_190x26.png" width="190" /><div></div> </div></a></figure></div> <p>I have reduced stability analysis to solving a system of <em>linear matrix inequalities.</em> The set of Lyapunov functions of this form is convex. And you can use techniques from <a href="https://www.argmin.net/p/convex-optimization-live-blog">convex optimization</a> to search for the potential function.</p> <p>Now, as written so far, this seems to have turned an annoying linear algebra problem (computing eigenvalues) into an annoying convex optimization problem (semidefinite programming). Fair! But the potential function method is far more extensible. For example, suppose the system were uncertain and could evolve according to either A1 or A2 at any given time. Then you can try to find a potential function that certifies both matrices. If one exists, then the global system will be stable, even if it’s switching. The appeal of the potential function method is this sort of robustness. It lets us handle inaccurate or uncertain dynamics in ways that linearization doesn’t. 
In the next lecture, we’ll apply these ideas to PID controllers and draw some interesting connections between analyzing the most ubiquitous control policies and the most ubiquitous optimization methods.</p> <p class="button-wrapper"><a class="button primary" href="https://www.argmin.net/subscribe?"><span>Subscribe now</span></a></p> <p class="authors">By Ben Recht</p>
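The quadratic-potential test described above can be checked numerically. As a minimal sketch (not part of the lecture; the example matrix A and the use of scipy's discrete Lyapunov solver are my own illustration), one can solve for P directly and verify both conditions:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# A discrete-time linear system x[t+1] = A x[t] (example matrix, not from the post).
A = np.array([[0.5, 0.4],
              [0.0, 0.3]])

# Solve A^T P A - P = -I for P. If the solution P is positive definite,
# then V(x) = x^T P x is a quadratic potential (Lyapunov) function:
# positive away from the origin and decreasing along every trajectory.
P = solve_discrete_lyapunov(A.T, np.eye(2))

assert np.all(np.linalg.eigvalsh(P) > 0)                # P is positive definite
assert np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0)  # potential decreases
```

For the switching case (A1 or A2 at each step), one would instead search for a single P satisfying the decrease inequality for both matrices, which is where a semidefinite programming solver earns its keep.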
03.02.2026 15:09 👍 0 🔁 0 💬 0 📌 0
TR26-012 | Perfectly Satisfiable Systems of Linear Equations and Fixed Weight Solutions | Johan Håstad <p>We study systems of linear equations modulo two in $n$ variables with three variables in each equation. We assume that the system has a solution with $pn$ variables taking the value 1 for some value $0 &lt; p &lt; 1$. We show that for any $\delta &gt; 0$ it is hard to find a solution of the same weight that satisfies at least a fraction $c_p + \delta$ of the equations. The constant $c_p$ is upper bounded by $.9$ for any value of $p$.</p>
03.02.2026 11:46 👍 0 🔁 0 💬 0 📌 0
Preview
The SPARSE-Relativization Framework and Applications to Optimal Proof Systems <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Fabian+Egidy">Fabian Egidy</a></p>We investigate the following longstanding open questions raised by Krajíček and Pudlák (J. Symb. L. 1989), Sadowski (FCT 1997), Köbler and Messner (CCC 1998) and Messner (PhD 2000). Q1: Does TAUT have (p-)optimal proof systems? Q2: Does QBF have (p-)optimal proof systems? Q3: Are there arbitrarily complex sets with (p-)optimal proof systems? Recently, Egidy and Glaßer (STOC 2025) contributed to these questions by constructing oracles that show that there are no relativizable proofs for positive answers of these questions, even when assuming well-established conjectures about the separation of complexity classes. We continue this line of research by providing the same proof barrier for negative answers of these questions. For this, we introduce the SPARSE-relativization framework, which is an application of the notion of bounded relativization by Hirahara, Lu, and Ren (CCC 2023). This framework allows the construction of sparse oracles for statements such that additional useful properties (like an infinite polynomial-time hierarchy) hold. By applying the SPARSE-relativization framework, we show that the oracle construction of Egidy and Glaßer also yields the following new oracle. O1: No set in PSPACE\NP has optimal proof systems, $\mathrm{NP} \subsetneq \mathrm{PH} \subsetneq \mathrm{PSPACE}$, and PH collapses We use techniques of Cook and Krajíček (J. Symb. L. 2007) and Beyersdorff, Köbler, and Müller (Inf. Comp. 2011) and apply our SPARSE-relativization framework to obtain the following new oracle. 
O2: All sets in PSPACE have p-optimal proof systems, there are arbitrarily complex sets with p-optimal proof systems, and PH is infinite Together with previous results, our oracles show that questions Q1 and Q2 are independent of an infinite or collapsing polynomial-time hierarchy.
03.02.2026 01:00 👍 0 🔁 0 💬 0 📌 0
Preview
7 Assistant Professor positions at the University of Warsaw, Poland (apply by Feb 20) at University of Warsaw (apply by February 20, 2026) <p>Multiple Assistant Professor positions in CS are available at the University of Warsaw (UW), including two with reduced teaching and increased salary. UW has one of the leading CS institutes in Europe with excellent students (highly successful in ACM ICPC) and strong research teams, especially in algorithms, logic and automata, and algorithmic economy (7 ERC grants in CS running at the moment).</p> <p>Website: <a href="https://jobs.uw.edu.pl/en-gb/offer/WMIM_2026/field/ADIUNKT/">https://jobs.uw.edu.pl/en-gb/offer/WMIM_2026/field/ADIUNKT/</a><br /> Email: Filip Murlak ; Oskar Skibski</p> <p class="authors">By shacharlovett</p>
02.02.2026 17:38 👍 0 🔁 0 💬 0 📌 0
Preview
Matters of Life and Death <div class="captioned-image-container"><figure><a class="image-link image2" href="https://substackcdn.com/image/fetch/$s_!fBX4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F107dfae6-620c-4c8e-a6cd-e13ffc2021be_1100x219.jpeg" target="_blank"><div class="image2-inset"> <source type="image/webp" /><img alt="" class="sizing-normal" height="219" src="https://substackcdn.com/image/fetch/$s_!fBX4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F107dfae6-620c-4c8e-a6cd-e13ffc2021be_1100x219.jpeg" width="1100" /><div></div> </div></a></figure></div> <p>When I was first trying to wrap my head around the subject, Sasha Megretski, in his ineffably Russian way, told me control theory was the study of death. Everything in control theory is about ensuring things “retvrn” to the origin. Most controls textbooks open this way, drowning you in a dense formalism of matrices, rational functions, and integral operators that all more or less certify that a dynamical system will converge to zero. Control theorists call this “stability,” and it’s hard to argue that it’s not the core of the field.</p> <p>But the hyperfocus on stability undersells what the mathematics initially set out to capture. “Zero” suggests an <em>equilibrium</em> where things will converge without effort. But the origin is almost never a “dead state” in control applications. Instead, zero refers to a steady state <em>far</em> from equilibrium requiring vast resources to maintain. Control theory is the study of <em>homeostasis,</em> not heat death.</p> <p>Homeostasis is the term physiologists and physicians use to describe the various biological processes inside an organism that work overtime to maintain constancy of physiological quantities.
For example, our bodies maintain tightly regulated concentrations of oxygen, electrolytes, and glucose in our bloodstream. To maintain such constancy, many other systems are in constant action. Heart rate varies, hormones modulate, muscle fibers tear, neurons signal. Vast amounts of energy are consumed and transformed to keep everything working no matter what threatening conditions we might encounter.</p> <p>Many of the core ideas of control theory are themselves inspired by the human body’s incredible system of homeostatic regulation. The interplay between control and physiology in the twentieth century deserves its own blog post (or five-volume book). Control theory has always been a biomimetic study of life, not death.</p> <p>In that spirit, let me motivate stability with homeostasis. Let’s assume we have a system working to maintain a setpoint. The system is designed to keep some signal as close to constant as possible. I’ll call this hopefully constant signal the <em>reguland.</em> The system experiences exogenous <em>disturbances</em> that might change its course and disrupt the constancy of the reguland. The system has a <em>state</em>, a collection of variables that at each fixed time predicts the system’s future. The state and disturbance at a particular time determine the next state according to some rule:</p> <pre><code>next_state = F(state, disturbance)</code></pre> <p>The reguland can be measured, and is a manifestation of the current system state. That is, we can write the value of the reguland as a function of the current state.</p> <pre><code>reguland = G(state)</code></pre> <p>For any constant value of the disturbance, we’d like conditions that guarantee the system settles to a state where the reguland equals a specified setpoint level.
No matter what the disturbance, the system has to converge to the <em>same</em> value of the reguland, but this might require a different state value for every value of the disturbance.</p> <p>The goal of control analysis is to find conditions on the maps F and G that guarantee such a steady state is possible and robust to further disturbances. One of the most basic analyses uses calculus. If we assume that F and G are differentiable, then the implicit function theorem guarantees there is a value of the state that maintains the setpoint. This state value is determined by the value of the disturbance and can be computed from the derivatives of F and G.</p> <p>These derivatives also tell us something about the system dynamics near the setpoint. If we start at a fixed point associated with a “normal” environmental disturbance, and nature slightly changes, we can approximate the convergence to the new fixed point using <em>linearization</em>. Linearization assumes the dynamics are well approximated by the linear model defined by the Taylor series approximations of F and G at the fixed point. From the linearization, we can derive properties of the derivative of F needed to guarantee that the system shifts to a new setpoint (e.g., the eigenvalues of the Jacobian matrix all need to have magnitude less than one). The idea of using static local information to inform temporal behavior is called <em>Lyapunov’s indirect method</em> or Lyapunov’s <em>first</em> method. We transform the problem of general nonlinear control into one of local linear algebra. The linear algebra tells us interesting and surprising things that are generally actionable in engineering design. We just have to be careful to know the limits of these analyses.</p> <p>One such interesting conclusion is that <em>gradient descent</em> is effectively necessary to maintain setpoints.
Following the linear algebra, we can always rewrite the state in such a way that one of the components is the running average of the deviation of the reguland from its setpoint. That is, there is always a component of the state equal to the last value minus the current deviation:</p> <pre><code>x[new] = x[old] - set_point_deviation</code></pre> <p>Control theorists call this <em>integral control</em>, and we’ll talk more about it next week. Integral control is an essential tool to maintain setpoints in control design. It turns out that it is in fact a <em>necessary</em> part of any setpoint regulation.<a class="footnote-anchor" href="#footnote-1" id="footnote-anchor-1" target="_self">1</a></p> <p>While Lyapunov’s first method provides useful insights into the local behavior of complex nonlinear dynamics, using these local linearizations in practice relies heavily on the precise specification of the model. Incorporating model uncertainty in these analyses is not straightforward.<a class="footnote-anchor" href="#footnote-2" id="footnote-anchor-2" target="_self">2</a> Luckily for us, Lyapunov came up with a second method, a direct method, that can help us analyze the behavior of less well specified systems. Lyapunov’s second method will be the subject of tomorrow’s post.</p> <p class="button-wrapper"><a class="button primary" href="https://www.argmin.net/subscribe?"><span>Subscribe now</span></a></p> <div class="footnote"> <a class="footnote-number" contenteditable="false" href="#footnote-anchor-1" id="footnote-1" target="_self">1</a><div class="footnote-content"><p>I’ll work out these mathematical details in class, and I’ll post a pdf with this derivation later today. I tried to write this out in substack equations, and it was <em>just</em> a bit more unwieldy than I wanted.
One of my goals here is getting these arguments shorter and simpler, but this is still a work in progress.</p></div> </div> <div class="footnote"> <a class="footnote-number" contenteditable="false" href="#footnote-anchor-2" id="footnote-2" target="_self">2</a><div class="footnote-content"> <p>At least not to me! YMMV.</p> <p></p> </div> </div> <p class="authors">By Ben Recht</p>
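As a toy illustration of the setpoint ideas in the post above (my own sketch, not the promised pdf derivation; the dynamics, gains, and disturbance value are all invented for the example), here is a scalar system hit by a constant disturbance, with an integral term that accumulates the setpoint deviation exactly as in x[new] = x[old] - set_point_deviation:

```python
# Toy homeostatic regulator: the reguland drifts under a constant
# disturbance, and an integral controller accumulates the setpoint
# deviation to cancel it (illustrative values, not from the post).
setpoint = 1.0
disturbance = 0.3   # constant exogenous disturbance
reguland = 0.0      # measured quantity we want held at the setpoint
integral = 0.0      # running sum of setpoint deviations
k_i = 0.5           # integral gain (assumed value)

for _ in range(200):
    deviation = reguland - setpoint
    integral -= deviation  # x[new] = x[old] - set_point_deviation
    reguland = 0.5 * reguland + disturbance + k_i * integral

print(round(reguland, 6))  # settles at the setpoint (prints 1.0)
```

Note that the integral state does not settle to zero; it converges to whatever constant value is needed to offset the disturbance, which is exactly the point of integral control.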
02.02.2026 15:03 👍 0 🔁 0 💬 0 📌 0
Preview
Partial Synchrony Progress Cheat Sheet <p>A walkthrough of this progress is available on the TheCoordinate podcast. A downloadable PDF is available here. The PBFT line of work A: Comparing PBFT, SBFT, Tendermint, HotStuff, and HotStuff-2: post1, post2 B: Explaining the core ideas behind PBFT: principles, PBFT C: Comparing the protocols Tendermint and Simplex, a lecture...</p>
02.02.2026 14:06 👍 0 🔁 0 💬 0 📌 0
Preview
On Small Pair Decompositions for Point Sets <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Kevin+Buchin">Kevin Buchin</a>, <a href="https://dblp.uni-trier.de/search?q=Jacobus+Conradi">Jacobus Conradi</a>, <a href="https://dblp.uni-trier.de/search?q=Sariel+Har-Peled">Sariel Har-Peled</a>, <a href="https://dblp.uni-trier.de/search?q=Antonia+Kalb">Antonia Kalb</a>, <a href="https://dblp.uni-trier.de/search?q=Abhiruk+Lahiri">Abhiruk Lahiri</a>, <a href="https://dblp.uni-trier.de/search?q=Lukas+Pl%C3%A4tz">Lukas Plätz</a>, <a href="https://dblp.uni-trier.de/search?q=Carolin+Rehs">Carolin Rehs</a></p>$\newcommand{\Re}{\mathbb{R}}$We study the minWSPD problem of computing the minimum-size well-separated pair decomposition of a set of points, and show constant approximation algorithms in low-dimensional Euclidean space and doubling metrics. This problem is computationally hard already in $\Re^2$, and is also hard to approximate. We also introduce a new pair decomposition, removing the requirement that the diameters of the parts should be small. Surprisingly, we show that in a general metric space, one can compute such a decomposition of size $O( \tfrac{n}{\varepsilon}\log n)$, which is dramatically smaller than the quadratic bound for WSPDs. In $\Re^d$, the bound improves to $O( d \tfrac{n}{\varepsilon}\log \tfrac{1}{\varepsilon } )$.
02.02.2026 01:00 👍 0 🔁 0 💬 0 📌 0
Before the ChatGPT-HW debate there were other ``If students use X to do their HW'' debates <p>Lance and I had a blog-debate about <a href="https://blog.computationalcomplexity.org/2026/01/what-to-do-about-students-using-chatgpt.html">What to do about students using ChatGPT to do their Homework</a>.</p> <p>Some commenters pointed out that we've been here before. I will now list past technologies that looked like they were a problem for student assignments and ponder what happened. </p> <p><i>If students can consult diagrams in their text then they will lose the ability to I DON'T KNOW AND I DON'T CARE. </i>I did a post about this titled <a href="https://blog.computationalcomplexity.org/2022/05/in-1960s-students-protested-vietnam.html">In the 1960's students protested the Vietnam war!/In 1830 students protested...Math?</a> I suspect that students eventually got to consult their texts. Actually, the entire topic, geometry of conic sections, seems to have gone away.</p> <p><i>If students learn how to read then they will lose the ability to listen to their elders tell stories and also lose the ability to memorize. </i>I've heard this was a concern though I don't really know if it was. In any case people are probably worse at memorizing than they used to be, but the plus of having books and reading far outweighs the negative of worse memories. </p> <p><i>If students use spellcheck they will forget how to spell. </i>I think people are sloppier with first drafts than they used to be since they know that spellcheck will catch their spelling errors. And even before ChatGPT there were programs to check grammar as well. My spellcheck wants me to replace ChatGPT with catgut. This points to the need to use spellcheck carefully, which foreshadows having to use ChatGPT carefully. My spellcheck <i>does think</i> that spellcheck is spelled correctly.</p> <p><i>If students have calculators they will forget that 7*8 equals...
hmm, I forgot: </i>I think we ask far fewer questions that depend on calculation than we used to. Do kids in grade school still memorize times-tables? If so, then up to what number? In my blog post on <a href="https://blog.computationalcomplexity.org/2025/03/numbers-that-look-prime-but-arent.html#comment-form">Numbers That Look Prime But Aren't</a>, I casually mentioned that students learn up to 12x12 but I do not know if that's true. </p> <p>SO- for those of you who have kids in grade school, leave a comment on whether they </p> <p>a) Memorize times-tables.</p> <p>b) Learn an algorithm for multiplication (O(n^2) or <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm">O(n^{1.58})</a> or <a href="https://en.wikipedia.org/wiki/Multiplication_algorithm">O(n log n)</a>). I used Wikipedia for the pointer to the O(n^{1.58}) algorithm; the entry describes it very well. I also used Wikipedia for the O(n log n) algorithm; that entry just says that there is a galactic algorithm (one that needs very large n to be worth using) and gives neither the algorithm nor a pointer to a paper that has it.</p> <p>c) Are allowed calculators on exams.</p> <p>d) Some combination of the above or something else. </p> <p><i>If students use Encyclopedias they will not be using primary sources. </i>Over time encyclopedias became seen as primary sources. Also, it would be hard for a fourth grader to access primary sources back then.</p> <p><i>If students use Wikipedia they will not be using primary sources. </i>I don't hear this debated anymore but I am not sure how the issue was resolved, or if it was resolved. If someone who has kids in school knows, please leave a comment. </p> <p>Annie and Lance Fortnow had a <a href="https://blog.computationalcomplexity.org/2011/01/why-my-kids-trust-wikipedia.html">blog entry</a> about the Wikipedia issue. </p> <p>I reviewed a book titled <i>Should You Believe Wikipedia? </i>by Amy Bruckman.
The review is <a href="https://www.cs.umd.edu/~gasarch/bookrev/NICK/wikipedia.pdf">here</a>. Spoiler alert: she thinks yes but I am more skeptical.</p> <p>I once needed a list of ER-complete problems and asked an expert if there was a survey. He said that the best source was the <a href="https://en.wikipedia.org/wiki/Existential_theory_of_the_reals">Wikipedia page</a>. For other examples of Wikipedia being the only source see <a href="https://blog.computationalcomplexity.org/2022/10/is-it-okay-for-paper-or-book-to-say-for.html">this blog post</a>.</p> <p>A similar issue is referring to papers on arXiv that have not been refereed. That might be the topic of a future blog post. </p> <p><i>If the first programming language is a high-level language then students will not learn assembly code and stuff about how computers really work. </i>This has happened. I think students do not know as much about low-level code as they used to. Is that a problem? This type of concern arises whenever a higher-level language becomes available. Students using ChatGPT to write code for them is another example of this issue, though it also has other problems. </p> <p><i>If students learn typing too early they will not learn cursive</i>. I am an early example of this---my handwriting was bad (still is) so I eagerly took a typing class in my school in 5th grade (the class was 13 girls whose parents wanted them to be secretaries, and me) and worked really hard at it and began handing in typed book reports. </p> <p>The only letters I know how to do in cursive are</p> <p> W I L L I A M   G A S A R C H </p> <p>and only in that order. </p> <p>ANYWAY, people have lost the ability to write in cursive, or even write in print neatly. 
<a href="https://en.wikipedia.org/wiki/Drew_Gilpin_Faust">Drew Faust</a>, a former history professor at Harvard (she retired in 2018), has pointed out that students have a hard time <i>reading</i> cursive in her article <a href="https://www.theatlantic.com/magazine/archive/2022/10/gen-z-handwriting-teaching-cursive-history/671246/">Gen Z Never Learned to Read Cursive</a>.</p> <p>I ask non-rhetorically: is losing the ability to read or write cursive a problem? </p> <p>Takeaways: </p> <p>1) (From the prior blog on ChatGPT) Grading has been broken for a long time. ChatGPT just makes that more obvious.</p> <p>2) When a new technology comes along we may have to rethink education. </p> <p class="authors">By gasarch</p>
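For readers curious about the O(n^{1.58}) multiplication algorithm mentioned above, here is a minimal sketch of the Karatsuba recursion (my own illustration, not from the post; the Wikipedia entry has the full treatment):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with the O(n^1.58) Karatsuba recursion."""
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around m decimal digits: x = high_x * 10^m + low_x.
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    # Three recursive multiplications instead of four.
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # → 7006652
```

The saving comes from computing the cross term z1 with one multiplication of the digit-sums rather than two separate products, which is what drops the exponent from 2 to log2(3) ≈ 1.58.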
01.02.2026 22:28 👍 0 🔁 0 💬 0 📌 0
TR26-011 | Complete Characterization of Randomness Extraction from DAG-Correlated Sources | Saswata Mukherjee, Divesh Aggarwal, Zihan Li, Maciej Obremski, João Ribeiro <p>We introduce the SHEDAG (Somewhere Honest Entropic sources over Directed Acyclic Graphs) source model, a general model for multi-block randomness sources with causal correlations. A SHEDAG source is defined over a directed acyclic graph (DAG) $G$ whose nodes output $n$-bit blocks. Blocks output by honest nodes are independent (by default uniformly random, more generally having high min-entropy), while blocks output by corrupted nodes are arbitrary functions of their causal views (all predecessors in $G$). We tightly characterize the conditions under which randomness extraction from SHEDAG sources is possible. $\textbf{Zero-error extraction:}$ We show that perfect extraction from SHEDAG sources with $t$ corruptions is possible if and only if $G$ contains an "unrelated set'' (an antichain under reachability) of size at least $t+1$. Conversely, if every unrelated set has size at most $t$, we show that no function can output a perfectly uniform bit. We also provide a polynomial-time algorithm to find a maximum unrelated set, thus efficiently identifying the largest corruption threshold $t$ allowing perfect extraction. $\textbf{Negligible-error extraction:}$ We identify a quantity that we call "resilience'' of a DAG $G$, denoted $\text{res}(G)$, that characterizes the possibility of randomness extraction with "negligible" error (in the block length). We show that negligible-error extraction is impossible whenever $t&gt;\text{res}(G)$, and, to complement this, for every $t\leq \text{res}(G)$ we construct explicit extractors with polynomial output length and negligible error. 
Our results generalize prior online source models studied by (Aggarwal, Obremski, Ribeiro, Siniscalchi, Visconti, Eurocrypt 2020) and (Chattopadhyay, Gurumukhani, Ringach, FOCS 2024), which correspond to the special case of a SHEDAG source whose DAG $G$ is a path.</p>
01.02.2026 07:57 👍 0 🔁 0 💬 0 📌 0
TR26-010 | Nearly Tight Bounds on the Block Number of Boolean Functions in Terms of Sensitivity | Anna Gal, Sourav Chakraborty <p>This paper explores the previously studied measure called block number of Boolean functions, that counts the maximum possible number of minimal sensitive blocks for any input. We present close to tight upper bounds on the block number in terms of the function’s sensitivity and the allowed block size, improving previous bounds by a quadratic factor. Moreover, our bound on the block number yields sharper upper bounds on DNF size and decision tree size. We obtain these results by introducing and estimating a novel measure called brick number, which not only upper bounds the block number but also leads to a new characterization of block sensitivity.</p>
01.02.2026 07:54 👍 0 🔁 0 💬 0 📌 0
Preview
The Fighting Temeraire (Re)visited <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody> <tr><td style="text-align: center;"><span style="margin-left: auto; margin-right: auto;"><a href="https://www.nationalgallery.org.uk/paintings/joseph-mallord-william-turner-the-fighting-temeraire"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsJLrXqVCzOq4lJz5SzAtbAem51NUoP7z1vleh8t9A-qQf4TJBrvmJUSi6vD_lZZ4yLzwBFBkIHoSmUXbO0anJzUKMC5jNTksOqLihW8zVQH3xAU5891-Ea1ClD9qBOYCGaKuJMcVGj9Mdd_MCL7rON9AEaOefNmQUTTwNiIHOzbT6ss9G6QmW/w400-h297/Fighting%20Temeraire.jpg" width="400" /></a></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><a href="https://www.nationalgallery.org.uk/paintings/joseph-mallord-william-turner-the-fighting-temeraire">The Fighting Temeraire by JWM Turner</a></td></tr> </tbody></table> <br /><div> <a href="https://blog.computationalcomplexity.org/2025/01/the-fighting-temeraire.html">A year ago</a> I wrote about an experiment I ran to learn about the modern period of art from ChatGPT. Chatty picked four paintings to discuss and I wrote about Joseph Mallord William Turner's <a href="https://www.nationalgallery.org.uk/paintings/joseph-mallord-william-turner-the-fighting-temeraire">The Fighting Temeraire</a>. To review, the Temeraire fought in the Battle of Trafalgar but in this painting it's being towed by a steamboat up the Thames to be broken down for parts. I liked the painting because it captured the change in technology from the great sailing ships to boats moving without sails. How technology can move us from the beautiful to the practical with parallels to what we see today.</div> <div><br /></div> <div>I wrote the post based on high-resolution images but there is nothing like seeing a painting in person. 
So during a trip into London, we made a pilgrimage to <a href="https://www.nationalgallery.org.uk/visiting/floorplans/level-2/room-40">Room 40 of the National Gallery</a> to see the Temeraire up close. The National Gallery is across the street from Trafalgar Square and a short walk from the Thames, up which the Temeraire traveled on its last trip.</div> <div><br /></div> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVC1EkOI373QlDgq3VHNBXuPzyXb8e4X-pHtLEYcPY3ZFYRhVL1on9mVSe9NrOQUyqtbQcqi7MwAw_L4lmiObbvZejnBHzotO3kqhs2G8qzrU6SdRhqwEE6mp2TvmcYJjdTjrwFeM-KYO11r_KFUqgP6WEZRMNi1clbO6kbx2W65TpjRzEPRMV/s4000/PXL_20260115_124457913.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVC1EkOI373QlDgq3VHNBXuPzyXb8e4X-pHtLEYcPY3ZFYRhVL1on9mVSe9NrOQUyqtbQcqi7MwAw_L4lmiObbvZejnBHzotO3kqhs2G8qzrU6SdRhqwEE6mp2TvmcYJjdTjrwFeM-KYO11r_KFUqgP6WEZRMNi1clbO6kbx2W65TpjRzEPRMV/w400-h300/PXL_20260115_124457913.jpg" width="400" /></a></div> <br /><div><br /></div> <div>I had a new experience with the painting. I could see the brush strokes and details I missed before, like the people on the steamboat and how its wheels pushed it along the water. More generally, I felt the emotional weight of seeing the last trip of a great ship. 
It was a reminder that no matter how digital our world gets, seeing art in its original form brings the artist's true intentions to mind.</div> <div><br /></div> <div>In the same room hangs another JMW Turner masterpiece, <a href="https://www.nationalgallery.org.uk/paintings/joseph-mallord-william-turner-rain-steam-and-speed-the-great-western-railway">Rain, Steam, and Speed - The Great Western Railway</a>.</div> <div><br /></div> <div><br /></div> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody> <tr><td style="text-align: center;"><span style="margin-left: auto; margin-right: auto;"><a href="https://www.nationalgallery.org.uk/paintings/joseph-mallord-william-turner-rain-steam-and-speed-the-great-western-railway"><img border="0" height="299" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnvJWJyAKpd08nXG_RQoGt7KnxtlTLkjjWpTP0EVLs7e4dwXEWozrQ33fkCloD9vucQp2NEyJrpmeL0jnc-v7dRtYjqeQs5FwZU_orNpDwP_V8CnXDyo0mjIFW1sGM5glp_DdRrGqdqar_lk2HA0zYFLFq2nGKP7KYYjUEhQmDILxKNyheT1wl/w400-h299/Great%20Western%20Railway.jpg" width="400" /></a></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"> <a href="https://www.nationalgallery.org.uk/paintings/joseph-mallord-william-turner-rain-steam-and-speed-the-great-western-railway">Rain, Steam, and Speed - The Great Western Railway</a> by Turner</td></tr> </tbody></table> <br /><div>Turner painted The Great Western Railway in 1844, less than a decade after the train line started running. Like the Temeraire, it captures the change in technology: a big fast train moving quickly towards the viewer through a quiet countryside. 
On the right side of the painting, a bit hard to see even in person, is a man with a horse-drawn plough; on the left, a small boat on the river.</div> <div><br /></div> <div>Coincidentally, I took the Great Western Railway into London that day, though it's not the same railroad company.</div> <div><br /></div> <div>Turner captured a new time in history, the early days of the industrial revolution, when man could travel faster than a horse on land and faster than the wind at sea, a reminder of the technological changes we see today. But these paintings also show the importance of the museum: of seeing something in person that no digital experience can replicate, in a place where you can focus on art, undistracted by anything except other art.</div> <div><br /></div> <div>Two hundred years from now, will someone go to a museum to see art that captures the early days of the AI revolution? And will it be generated by a human?</div> <p class="authors">By Lance Fortnow</p>
28.01.2026 07:35 👍 0 🔁 0 💬 0 📌 0
Sampling Sphere Packings with Continuum Glauber Dynamics <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Aiya+Kuchukova">Aiya Kuchukova</a>, <a href="https://dblp.uni-trier.de/search?q=Santosh+Vempala">Santosh Vempala</a>, <a href="https://dblp.uni-trier.de/search?q=Daniel+J.+Zhang">Daniel J. Zhang</a></p>We establish a spectral gap for Continuum Glauber dynamics on the hard sphere model assuming strong spatial mixing, thereby extending the range of parameters in which Continuum Glauber is provably rapidly mixing. To do this, we introduce continuous extensions of spectral independence and negative fields localization. Our techniques apply to general Gibbs point processes with finite-range repulsive pair potentials. As a corollary, we improve the threshold up to which packings of a fixed number of spheres can be sampled from a bounded domain.
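For intuition about the kind of dynamics being analyzed, here is a minimal discrete-time Metropolis birth-death sampler for the hard-sphere model in a 2D box. This is a simplified stand-in, not the continuous-time Continuum Glauber dynamics of the paper; the function name and all parameter choices are my own.

```python
import math
import random

def birth_death_step(points, lam, r, L, rng):
    """One Metropolis birth-death move for hard spheres of radius r in the
    box [0, L]^2, targeting the hard-sphere model at fugacity lam (a Poisson
    point process conditioned on all pairwise distances being >= 2r)."""
    n = len(points)
    vol = L * L
    if rng.random() < 0.5:
        # Birth: propose a uniformly random center; reject on any overlap.
        p = (rng.uniform(0, L), rng.uniform(0, L))
        if all(math.dist(p, q) >= 2 * r for q in points):
            # Standard birth acceptance ratio lam * vol / (n + 1).
            if rng.random() < min(1.0, lam * vol / (n + 1)):
                points.append(p)
    elif n > 0:
        # Death: remove a uniformly random sphere, acceptance n / (lam * vol).
        if rng.random() < min(1.0, n / (lam * vol)):
            points.pop(rng.randrange(n))
    return points

rng = random.Random(0)
config = []
for _ in range(5000):
    birth_death_step(config, lam=0.5, r=0.05, L=1.0, rng=rng)
```

Each move either proposes inserting a sphere at a uniform location or deleting a uniform existing one, with the standard birth-death acceptance ratios for fugacity lam; the hard-core constraint guarantees the chain never creates an overlap.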
27.01.2026 01:00 👍 0 🔁 0 💬 0 📌 0
What is a POLYNOMIAL-TIME Computable L2-Function? <p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Aras+Bacho">Aras Bacho</a>, <a href="https://dblp.uni-trier.de/search?q=Svetlana+Selivanova">Svetlana Selivanova</a>, <a href="https://dblp.uni-trier.de/search?q=Martin+Ziegler">Martin Ziegler</a></p>We give two natural definitions of polynomial-time computability for L2 functions, and we show them to be incomparable (unless the complexity class FP_1 includes #P_1).
26.01.2026 01:00 👍 0 🔁 0 💬 0 📌 0
Online Talks on Accessible Theorems! Bogdan Grechuk has written a book, _Landscapes of 21st Century Mathematics_, that came out in 2021. There is a revised version coming out soon. The theme is that he takes theorems _whose statements can be understood_ and describes each in 5-10 pages. No proofs, but lots of context and, most importantly, if you read all 800 pages of it you know ABOUT lots of math and where to look it up. He emailed me about a seminar series with talks about various chapters. The email is below. The talks sound like a great way to learn about some of the math that is out there! To complement the book with great accessible theorems, I am also organizing a series of online seminars with accessible talks about great recent theorems. The seminar has an impressive list of speakers featuring world-renowned mathematicians: January 28, 2026. Prof. Boaz Klartag. "Sphere packing" ** _join the talk here_**. February 11, 2026. Prof. Avi Wigderson. "Expander graphs" ** _join the talk here_** March 4, 2026. Prof. Assaf Naor. "Sparsest cut" April 2026. Prof. Harald Helfgott (date and topic to be decided) May 20, 2026. Prof. Ben Green. "The polynomial Freiman-Ruzsa conjecture" More details can be found on the seminar webpage **https://th21.le.ac.uk/** and my blog **https://functor.network/user/3333/entries** devoted to great accessible theorems across all of mathematics. By gasarch
24.01.2026 14:11 👍 0 🔁 0 💬 0 📌 0
Concurrent 2-round and 3-round BFT protocols under granular synchrony Consensus protocols for $n=3f+1$ can tolerate $f$ Byzantine faults under partial synchrony. However, they also require a latency of 3 rounds in the good case, when the leader is non-faulty and the system is synchronous. Can we get a protocol with better latency, or tolerate more faults, if we assume $n=3f+2p+1$?...
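The classical threshold in the first sentence is easy to sanity-check. Here is a sketch of the standard n >= 3f + 1 quorum-intersection arithmetic, not the paper's new protocols; the function names are mine.

```python
def max_byzantine_faults(n):
    """Largest f with n >= 3f + 1, the classical partial-synchrony bound."""
    return (n - 1) // 3

def min_quorum_overlap(n, f):
    """With quorums of size n - f, any two quorums intersect in at least
    2(n - f) - n = n - 2f replicas. When n >= 3f + 1 this is >= f + 1,
    so the intersection always contains at least one honest replica."""
    return 2 * (n - f) - n
```

For n = 4 and f = 1, quorums have size 3 and any two overlap in at least 2 replicas, hence at least one honest one; this intersection property is what safety arguments under partial synchrony rely on.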
24.01.2026 06:00 👍 0 🔁 0 💬 0 📌 0