
ArXiv Paperboy (Econ.GN + Econ.TH)

@arxivecongnbot

Posts updates from arXiv RSS feeds for methodology papers in Economic Theory and General Economics. Maintainer: @paulgp.com. Source code: https://github.com/paulgp/bsky_paperbot [forked from Apoorva Lal]

131
Followers
1
Following
1,503
Posts
30.11.2025
Joined

Latest posts by ArXiv Paperboy (Econ.GN + Econ.TH) @arxivecongnbot

This paper develops new identification results for multidimensional continuous measurement-error models where all observed measurements are contaminated by potentially correlated errors and none provides an injective mapping of the latent distribution. Using third-order cross moments, the paper constructs a three-way tensor whose unique decomposition, guaranteed by Kruskal's theorem, identifies the factor loading matrices. Starting with a linear structure, the paper recovers the full distribution of latent factors by constructing suitable measurements and applying scalar or multivariate versions of Kotlarski's identity. As a result, the joint distribution of the latent vector and measurement errors is fully identified without requiring injective measurements, showing that multivariate latent structure can be recovered in broader settings than previously believed. Under injectivity, the paper also provides user-friendly testable conditions for identification. Finally, the paper provides general identification results for nonlinear models using a newly defined generalized Kruskal rank, the signal rank, of integral operators. These results have wide applicability in empirical work involving noisy or indirect measurements, including factor models, survey data with reporting errors, mismeasured regressors in econometrics, and multidimensional latent-trait models in psychology and marketing, potentially enabling more robust estimation and interpretation when clean measurements are unavailable.
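To make the tensor construction concrete, here is a minimal numpy sketch under an assumed linear factor structure X_j = A_j theta + eps_j, with skewed independent factors and independent mean-zero errors; all names, dimensions, and distributions are illustrative, not the paper's.

import numpy as np

rng = np.random.default_rng(0)
K, n = 2, 400_000                      # latent dimension, sample size (illustrative)

# Assumed linear structure X_j = A_j theta + eps_j, with skewed independent
# factors (so third moments carry signal) and independent mean-zero errors.
A1, A2, A3 = (rng.normal(size=(3, K)) for _ in range(3))
theta = rng.exponential(size=(n, K)) - 1.0         # centered: E[theta_r^3] = 2
X1 = theta @ A1.T + rng.normal(size=(n, 3))
X2 = theta @ A2.T + rng.normal(size=(n, 3))
X3 = theta @ A3.T + rng.normal(size=(n, 3))

# Empirical third-order cross-moment tensor T[i,j,k] = E[X1_i X2_j X3_k].
T = np.einsum('ni,nj,nk->ijk', X1, X2, X3) / n

# With independent mean-zero errors, T is approximately the rank-K CP tensor
# sum_r E[theta_r^3] a1_r (x) a2_r (x) a3_r; Kruskal's theorem makes that
# decomposition unique (up to scale and permutation), identifying the loadings.
T_model = np.einsum('r,ir,jr,kr->ijk', 2.0 * np.ones(K), A1, A2, A3)
print(np.abs(T - T_model).max())       # small, up to sampling noise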

arXiv📈🤖
Identification of Multivariate Measurement Error Models
By Hu

13.03.2026 03:49 👍 0 🔁 0 💬 0 📌 0
Australian house prices have risen strongly since the mid-1990s, but growth has been highly uneven across regions. Raw growth figures obscure whether these differences reflect persistent structural trends or cyclical fluctuations. We address this by estimating a three-factor model in levels for regional repeat-sales log price indexes over 1995-2024. The model decomposes each regional index into a national Market factor, two stationary spreads (Mining and Lifestyle) that capture mean-reverting geographic cycles, and a city-specific residual. The Mining spread, proxied by a Perth-Sydney index differential, reflects resource-driven oscillations in relative performance; the Lifestyle spread captures amenity-driven coastal and regional cycles. The Market loading isolates each region's fundamental sensitivity, beta, to national growth, so that a city's growth under an assumed national change is calculated from its beta once mean-reverting spreads are netted out. Comparing realised paths to these factor-implied trajectories indicates when a city is historically elevated or depressed, and attributes the gap to Mining or Lifestyle spreads.
Expanding-window ARIMAX estimation reveals that Market betas are stable across major shocks (the mining boom, the Global Financial Crisis, and COVID-19), while Mining and Lifestyle behave as stationary spreads that widen forecast funnels without overturning the cross-sectional ranking implied by beta. Melbourne amplifies national growth, Sydney tracks the national trend closely, and regional areas dampen it. The framework thus provides a simple, factor-based tool for interpreting regional growth differentials and their persistence.
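A stylized numpy sketch of the levels decomposition described above: regress a simulated regional log index on a national Market factor and two mean-reverting spreads, then read off factor-implied growth from beta. The series, coefficients, and plain OLS here are placeholders; the paper uses repeat-sales indexes and expanding-window ARIMAX estimation.

import numpy as np

rng = np.random.default_rng(1)
T = 360                                           # monthly obs, 1995-2024 (illustrative)

market = np.cumsum(rng.normal(0.004, 0.01, T))    # national log-index factor (random walk)
mining = np.zeros(T); lifestyle = np.zeros(T)
for t in range(1, T):                             # stationary AR(1) spreads (mean-reverting)
    mining[t] = 0.97 * mining[t - 1] + rng.normal(0, 0.01)
    lifestyle[t] = 0.95 * lifestyle[t - 1] + rng.normal(0, 0.01)

# Simulated regional index with beta > 1 (a Melbourne-like amplifier).
region = 1.3 * market + 0.4 * mining - 0.2 * lifestyle + rng.normal(0, 0.005, T)

# Levels regression: region_t = alpha + beta*market_t + g1*mining_t + g2*lifestyle_t + e_t
X = np.column_stack([np.ones(T), market, mining, lifestyle])
alpha, beta, g1, g2 = np.linalg.lstsq(X, region, rcond=None)[0]
print(f"beta={beta:.2f}  mining={g1:.2f}  lifestyle={g2:.2f}")

# Factor-implied growth under an assumed 5% national rise, spreads netted out:
print(f"implied regional growth: {beta * 0.05:.3%}")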

arXiv📈🤖
Market Sensitivities and Growth Differentials Across Australian Housing Markets
By Sijp

13.03.2026 01:36 👍 0 🔁 1 💬 0 📌 0
The availability of relational data can offer new insights into the functioning of the economy. Nevertheless, modeling the dynamics in network data with multiple types of relationships is still a challenging issue. Stochastic block models provide a parsimonious and flexible approach to network analysis. We propose a new stochastic block model for multidimensional networks, where layer-specific hidden Markov-chain processes drive the changes in community formation. The changes in the block membership of a node in a given layer may be influenced by its own past membership in other layers. This allows for clustering overlap, clustering decoupling, or more complex relationships between layers, including settings of unidirectional, or bidirectional, non-linear Granger block causality. We address the overparameterization issue of a saturated specification by assuming a Multi-Laplacian prior distribution within a Bayesian framework. Data augmentation and Gibbs sampling are used to make the inference problem more tractable. Through simulations, we show that standard linear models and the pairwise approach are unable to detect block causality in most scenarios. In contrast, our model can recover the true Granger causality structure. As an application to international trade, we show that our model offers a unified framework, encompassing community detection and Gravity equation modeling. We found new evidence of block Granger causality of trade agreements and flows and core-periphery structure in both layers on a large sample of countries.

The availability of relational data can offer new insights into the functioning of the economy. Nevertheless, modeling the dynamics in network data with multiple types of relationships is still a challenging issue. Stochastic block models provide a parsimonious and flexible approach to network analysis. We propose a new stochastic block model for multidimensional networks, where layer-specific hidden Markov-chain processes drive the changes in community formation. The changes in the block membership of a node in a given layer may be influenced by its own past membership in other layers. This allows for clustering overlap, clustering decoupling, or more complex relationships between layers, including settings of unidirectional, or bidirectional, non-linear Granger block causality. We address the overparameterization issue of a saturated specification by assuming a Multi-Laplacian prior distribution within a Bayesian framework. Data augmentation and Gibbs sampling are used to make the inference problem more tractable. Through simulations, we show that standard linear models and the pairwise approach are unable to detect block causality in most scenarios. In contrast, our model can recover the true Granger causality structure. As an application to international trade, we show that our model offers a unified framework, encompassing community detection and Gravity equation modeling. We found new evidence of block Granger causality of trade agreements and flows and core-periphery structure in both layers on a large sample of countries.
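A toy simulation of the membership dynamics the abstract describes: each node's block in each layer follows a layer-specific Markov chain, and the layer-2 transition also depends on the node's past block in layer 1 (unidirectional block Granger causality). Block counts, transition matrices, and link probabilities are invented for illustration; the paper's Bayesian estimator with the Multi-Laplacian prior is not reproduced.

import numpy as np

rng = np.random.default_rng(2)
N, T, K = 60, 50, 2                    # nodes, periods, blocks (illustrative)

# Layer 1: an autonomous two-state Markov chain for each node's block.
P1 = np.array([[0.9, 0.1],
               [0.1, 0.9]])
# Layer 2: transitions depend on the node's *past* block in layer 1
# (unidirectional block Granger causality, layer 1 -> layer 2).
P2_given_z1 = np.array([[[0.9, 0.1],   # if past z1 = 0
                         [0.5, 0.5]],
                        [[0.5, 0.5],   # if past z1 = 1
                         [0.1, 0.9]]])

z1 = rng.integers(0, K, size=(T, N))
z2 = rng.integers(0, K, size=(T, N))
for t in range(1, T):
    for i in range(N):
        z1[t, i] = rng.choice(K, p=P1[z1[t - 1, i]])
        z2[t, i] = rng.choice(K, p=P2_given_z1[z1[t - 1, i], z2[t - 1, i]])

# Edges in each layer then follow block-dependent link probabilities
# (core-periphery flavor: dense within block 0, sparse elsewhere).
B = np.array([[0.5, 0.1],
              [0.1, 0.02]])
adj1_last = rng.random((N, N)) < B[z1[-1][:, None], z1[-1][None, :]]
print(adj1_last.mean())                # realized edge density in the last period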

arXiv📈🤖
A Dynamic Stochastic Block Model for Multidimensional Networks
By López, Casarin

12.03.2026 22:07 👍 0 🔁 0 💬 0 📌 0
arXiv:2210.17063v2 Announce Type: replace
Abstract: This study examines the problem of determining whether to treat individuals based on observed covariates. The most common decision rule is the conditional empirical success (CES) rule proposed by Manski (2004), which assigns individuals to treatments that yield the best experimental outcomes conditional on the observed covariates. By contrast, using shrinkage estimators, which shrink unbiased but noisy preliminary estimates toward the average of these estimates, is a common approach in statistical estimation problems because it is well known that shrinkage estimators have smaller mean squared errors than unshrunk estimators. Inspired by this idea, we propose a computationally tractable shrinkage rule that selects the shrinkage factor by minimizing an upper bound on the maximum regret. We then compare the maximum regret of the proposed shrinkage rule with that of the CES and pooling rules when the parameter space is correctly specified or misspecified. Our theoretical results demonstrate that the shrinkage rule performs well in many cases, and these findings are further supported by numerical experiments. Specifically, we show that the maximum regret of the shrinkage rule can be strictly smaller than that of the CES and pooling rules in certain cases when the parameter space is correctly specified. In addition, we find that the shrinkage rule is robust against misspecification of the parameter space. Finally, we apply our method to experimental data from the National Job Training Partnership Act Study.
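A minimal numpy sketch contrasting the three rules compared above: CES treats each covariate cell by its own estimate, pooling uses the grand mean everywhere, and the shrinkage rule decides on estimates shrunk toward the grand mean. The effect numbers and the shrinkage factor lam are placeholders; the paper selects lam by minimizing an upper bound on the maximum regret.

import numpy as np

# Estimated treatment effects by covariate cell (noisy experimental estimates).
effects = np.array([0.8, -0.3, 0.1, -0.6, 0.4])   # illustrative numbers

# CES rule (Manski 2004): treat a cell iff its own estimate is positive.
ces_rule = effects > 0

# Shrinkage rule: shrink each estimate toward the grand mean before deciding.
lam = 0.5                                          # placeholder shrinkage factor
shrunk = lam * effects + (1 - lam) * effects.mean()
shrinkage_rule = shrunk > 0

# Pooling rule: one decision for everyone, based on the pooled mean.
pooling_rule = np.full_like(ces_rule, effects.mean() > 0)

print(ces_rule, shrinkage_rule, pooling_rule, sep="\n")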

arXiv📈🤖
Shrinkage Methods for Treatment Choice
By

12.03.2026 19:19 👍 0 🔁 0 💬 0 📌 0
This paper proposes a change in perspective on the "transformation of values" problem: from "searching for a single constant solution" to "characterizing the allocation space under objective constraints imposed by the physical production network." Building an input-output model, we show mathematically that whenever the macroeconomy features a physical surplus, the set of skilled-to-simple labor reduction vectors that can sustain the subsistence floor of the entire labor force forms a bounded value-feasible set. Within this multidimensional region, the classical "two great macro equalities" necessarily hold simultaneously for a reasonable range of profit rates. Hence, without violating the physical minimum conditions for reproduction, the law of value and the nominal price system can be made logically consistent.
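For orientation, the standard input-output objects behind this abstract, in a stylized textbook rendering (with input matrix $A$, direct-labor vector $l$, gross output $x$, values $v$, prices $p$, wage $w$, and profit rate $r$); the paper's value-feasible set of skilled-to-simple reduction vectors is not reproduced here:

\[
v = vA + l, \qquad p = (1+r)\,(pA + w\,l), \qquad x \ge Ax \ \text{(physical surplus, strict in some component)}.
\]

The "two great macro equalities" then require total price to equal total value, $px = vx$, and total profit to equal total surplus value; the abstract's claim is that suitable reduction vectors make both hold simultaneously for a reasonable range of $r$.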

arXiv📈🤖
Feasible Sets and the Transformation of Values
By Lyu

12.03.2026 16:45 👍 0 🔁 0 💬 0 📌 0
The code that was used in Erdil & Ergin (2008, AER) to compute stable improvement cycles sometimes generated unstable matchings. I identify the minor bug in their code that caused this issue, and I present a corrected implementation. While the general insights from the computational experiments obtained by Erdil & Ergin (2008) persist, the true fraction of improving students is slightly smaller than reported, while their average improvement in rank is larger than reported. All theoretical findings in Erdil & Ergin (2008) are unaffected.
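For context, "unstable" here means some student-school pair prefer each other to their assigned partners. A brute-force blocking-pair check on toy one-to-one preferences (illustrative only, not the corrected implementation from the comment):

# Toy one-to-one matching; lower rank index = more preferred.
student_pref = {"s1": ["A", "B"], "s2": ["A", "B"]}
school_pref = {"A": ["s2", "s1"], "B": ["s1", "s2"]}
match = {"s1": "A", "s2": "B"}                      # illustrative assignment

def blocking_pairs(match, student_pref, school_pref):
    student_of = {c: s for s, c in match.items()}
    pairs = []
    for s, prefs in student_pref.items():
        for c in prefs:
            if prefs.index(c) < prefs.index(match[s]):          # s prefers c
                sp = school_pref[c]
                if sp.index(s) < sp.index(student_of[c]):       # c prefers s
                    pairs.append((s, c))
    return pairs

print(blocking_pairs(match, student_pref, school_pref))  # [('s2', 'A')]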

arXiv📈🤖
Comment on 'What's the Matter with Tie-Breaking: Improving Efficiency in School Choice'
By Demeulemeester

12.03.2026 16:43 👍 0 🔁 0 💬 0 📌 0
We consider a sender-receiver game in which the receiver's action is binary and the sender's preferences are state-independent. The state is multidimensional. The receiver can select one dimension of the state to check (i.e., observe) before choosing his action. We identify a class of influential equilibria in which the sender's message reveals which components of the state are highest, and the receiver selects one of these components to check. The sender can benefit from communication if and only if she prefers one of these equilibria to the no-communication outcome. Similar equilibria exist when the receiver can check multiple dimensions.
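A toy rendering of the equilibrium logic just described, with invented numbers: the sender's message reveals which coordinates of the state are highest, the receiver checks one reported coordinate, and acts if it clears an assumed threshold (0.5 is a placeholder, not derived from the model):

import numpy as np

rng = np.random.default_rng(3)
state = rng.random(4)                  # multidimensional state (illustrative)

# Sender's message: the set of highest components of the state.
top = np.flatnonzero(state >= state.max() - 1e-9)

# Receiver checks one reported dimension, then takes the binary action iff the
# checked value clears an assumed threshold.
checked = rng.choice(top)
action = int(state[checked] > 0.5)
print(top, checked, action)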

arXiv📈🤖
Checking Cheap Talk
By Ball, Gao

12.03.2026 16:40 👍 0 🔁 0 💬 0 📌 0
We introduce an information order on experiments based on weighted garbling, a generalization of the standard notion of garbling. In this order, an experiment is more informative than another if the latter is a weighted garbling of the former. We show that this is equivalent to ordinary garbling conditional on a payoff-irrelevant event. We also characterize the order in terms of induced posterior belief distributions, showing that it depends only on their support. Our main results provide two decision-theoretic characterizations of this order. First, in static decision problems, one experiment dominates another if and only if its value of information is at least a fixed fraction of the other's across all problems. Second, in a class of stopping time problems with a hidden Markov process and repeated experimentation, one experiment dominates another if and only if it yields weakly higher expected payoffs for every problem with a regular prior.
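As background, the standard notion being generalized: experiment $Q$ is a (Blackwell) garbling of experiment $P$ if, for every state $\omega$,

\[
Q(y \mid \omega) = \sum_{x} G(y \mid x)\, P(x \mid \omega), \qquad G(y \mid x) \ge 0, \quad \sum_{y} G(y \mid x) = 1.
\]

The paper's weighted garbling relaxes this requirement; per the abstract, it is equivalent to ordinary garbling conditional on a payoff-irrelevant event, and the induced order depends only on the support of the posterior belief distributions.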

arXiv📈🤖
Weighted Garbling
By Kim, Obara

12.03.2026 16:39 👍 1 🔁 0 💬 0 📌 0
We study the design of an auction for an income-generating asset such as an intellectual property license. Each bidder has a signal about his future income from acquiring the asset. After the asset is allocated, the winner's income from the asset is realized privately. The principal can audit the winner, at a cost, and then charge a payment contingent on the winner's realized income. We solve for an auction that maximizes the principal's revenue, net of auditing costs. The winning bidder is charged linear royalties up to a cap, beyond which there is no auditing. A higher bidder pays more in cash upfront and faces a lower royalty cap.
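A stylized rendering of the payment schedule described above, with invented functional forms (the linear upfront and cap schedules are placeholders, not the paper's derived optimum):

def winner_payment(bid: float, income: float, royalty_rate: float = 0.2) -> float:
    # Stylized forms (placeholders, not the paper's derived optimum):
    upfront = 0.5 * bid           # higher bidder pays more cash upfront
    cap = max(2.0 - bid, 0.0)     # ... and faces a lower royalty cap
    # Linear royalties up to the cap; beyond the cap there is no auditing,
    # so realized income above cap / royalty_rate is never charged.
    return upfront + min(royalty_rate * income, cap)

print(winner_payment(bid=1.0, income=4.0))   # 0.5 + min(0.8, 1.0) = 1.3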

arXiv📈🤖
Optimal Auction Design with Contingent Payments and Costly Verification
By Ball, Pekkarinen

12.03.2026 16:36 👍 0 🔁 0 💬 0 📌 0
Frontier AI Safety Policies concentrate on prevention -- capability evaluations, deployment gates, and usage constraints -- while neglecting institutional capacity to coordinate responses when prevention fails. We argue that this coordination gap is structural: investments in ecosystem robustness yield diffuse benefits but concentrated costs, generating systematic underinvestment. Drawing on risk regimes in nuclear safety, pandemic preparedness, and critical infrastructure, we propose that similar mechanisms -- precommitment, shared protocols, and standing coordination venues -- could be adapted to frontier AI governance. Without such architecture, institutions cannot learn from failures at the pace of relevance.

arXiv📈🤖
The coordination gap in frontier AI safety policies
By Mengesha

12.03.2026 16:35 👍 0 🔁 0 💬 0 📌 0
We propose a semi-structural DSGE model for the Israeli economy, as a small open economy, which contains a financial friction in the household-sector credit market. The friction is reflected in a positive relationship between households' leverage ratio and the interest rate (credit spread) on their debt, as evident in the Israeli data. Our main purpose is to evaluate the implications of this friction for the implementation of monetary policy and macroprudential policy. Our two main findings are as follows. First, it is important that monetary policy also react to developments in the credit market, such as a widening credit spread, to be more effective in achieving its main goals of stabilizing inflation and real activity. Second, macroprudential policy may increase the sensitivity of households' credit spread to their leverage. Thus, this policy can mitigate or even prevent over-borrowing and reduce the risk of a debt-deleveraging crisis. Moreover, in a case of demand weakness and debt deleveraging, in addition to accommodative monetary policy, macroprudential policy may contribute to stimulating demand through a corresponding reduction in the credit spread.
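One stylized way to write the household credit friction described above, as an assumed reduced form rather than the paper's specification: the household borrowing rate $i^b_t$ carries a spread $s_t$ that increases in the leverage ratio $b_t / y_t$,

\[
i^b_t = i_t + s_t, \qquad s_t = \psi\!\left(\frac{b_t}{y_t}\right), \quad \psi' > 0,
\]

so macroprudential policy that steepens $\psi$ raises the marginal cost of leverage, while a monetary rule that also responds to $s_t$ leans against credit-market developments.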

arXiv📈🤖
A Semi-Structural Model with Household Debt for Israel
By Ilek, Cohen

12.03.2026 16:31 👍 0 🔁 0 💬 0 📌 0
A designer relies on an experimenter to provide information to a decision maker, but the experimenter has incentives to persuade rather than merely transmit information. Anticipating this motive, the designer can restrict the set of admissible experiments, but cannot prevent the experimenter from garbling any admissible experiment. We model this situation as delegation over experiments. The optimal delegation set can be obtained by comparing maximally informative experiments among those the experimenter has no incentive to garble. When the experimenter's preferences are $S$-shaped, we fully characterize such experiments as double censorship. Relative to the full-delegation outcome of upper censorship, double censorship features an intermediate pooling region, inducing a smaller pooling region for the highest states. We show that the designer strictly benefits from imposing a nontrivial delegation set to constrain the experimenter's ability to persuade while retaining valuable information provision.

arXiv📈🤖
Delegated Information Provision
By Bilotta, Carnehl, Preusser

12.03.2026 16:27 👍 0 🔁 0 💬 0 📌 0
The theory of thermal macroeconomics (TM) analyses economic phenomena within the mathematical framework of classical thermodynamics, using a set of axioms that apply to the purely macroscopic aspects of an economy [CM]. The theory shows that the possible macro-behaviours are governed by an entropy function. In simple idealised cases, the entropy function can be calculated from the rules governing the interactions of individual agents. But where this is not possible, TM predicts that the entropy can nonetheless be measured empirically through an economic analogue of calorimetry in physics. Using computer simulations, we show the in-principle feasibility of this approach: an entropy function can successfully be measured for a range of simulated economies that we tested. In cases where entropy can be calculated analytically from microfoundational assumptions, the measured entropy agrees well. In more complex cases, where microfoundational analysis is infeasible, our method of measuring entropy still applies and is validated by demonstrations that entropy is a state function of an economic system, i.e., exhibits path independence. This appears to hold even for some systems for which we do not have a proof that the Axioms of TM apply. Furthermore, in all cases tested, entropy is concave, as predicted by TM. As shown in [CM], once the entropy function is established for a simulated exchange economy, it is possible to derive prices, the value of money and various other quantities, and make predictions about the effects of putting two or more economies in contact.
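A generic numerical illustration of the state-function (path-independence) test the abstract relies on, using a made-up entropy differential dS = dx/x + 2 dy/y; this is the textbook check, not the paper's economic calorimetry:

import numpy as np

# Made-up exact differential dS = dx/x + 2 dy/y (so S = log x + 2 log y).
# If the increments instead came from simulated "economic calorimetry",
# path independence of this line integral is the test that S is a state function.
def line_integral(path, steps=100_000):
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        x = np.linspace(x0, x1, steps)
        y = np.linspace(y0, y1, steps)
        total += np.sum(np.diff(x) / x[:-1]) + np.sum(2 * np.diff(y) / y[:-1])
    return total

path_a = [(1, 1), (2, 1), (2, 3)]       # two routes between the same endpoints
path_b = [(1, 1), (1, 3), (2, 3)]
print(line_integral(path_a), line_integral(path_b))   # both ~ log 2 + 2 log 3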

arXiv📈🤖
Towards macroeconomic analysis without microfoundations: measuring the entropy of simulated exchange economies
By Luo, MacKay, Chater

12.03.2026 16:26 👍 1 🔁 0 💬 0 📌 0
Immediately after the establishment of the new Meiji government in the 19th century, a system of conscription was adopted, and the exemption rule changed several times. Using individual-level panel data on the academic performance of Keio Gijuku students, I find a surge in the share of students who were family heads between 1884 and 1888, with the share declining immediately thereafter. After exemption privileges for private-school students were restored, family heads' performance declined, and the difference between family heads and non-heads disappeared. This makes it evident that conscription increased educational attendance quantitatively but did not qualitatively improve academic performance.

arXiv📈🤖
Conscription and its exemption in 19th Century Japan: Incentivized family head in educational market
By Yamamura

12.03.2026 03:56 👍 0 🔁 0 💬 0 📌 0
This survey (re)introduces reinforcement learning methods to economists. The curse of dimensionality limits how far exact dynamic programming can be effectively applied, forcing us to rely on suitably "small" problems or our ability to convert "big" problems into smaller ones. While this reduction has been sufficient for many classical applications, a growing class of economic models resists such reduction. Reinforcement learning algorithms offer a natural, sample-based extension of dynamic programming, extending tractability to problems with high-dimensional states, continuous actions, and strategic interactions. I review the theory connecting classical planning to modern learning algorithms and demonstrate their mechanics through simulated examples in pricing, inventory control, strategic games, and preference elicitation. I also examine the practical vulnerabilities of these algorithms, noting their brittleness, sample inefficiency, sensitivity to hyperparameters, and the absence of global convergence guarantees outside of tabular settings. The successes of reinforcement learning remain strictly bounded by these constraints, as well as a reliance on accurate simulators. When guided by economic structure, reinforcement learning provides a remarkably flexible framework. It stands as an imperfect, but promising, addition to the computational economist's toolkit. A companion survey (Rust and Rawat, 2026b) covers the inverse problem of inferring preferences from observed behavior.
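Since the survey presents reinforcement learning as a sample-based extension of dynamic programming, here is a minimal tabular Q-learning loop on a toy two-state MDP; the environment, rewards, and hyperparameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1           # step size, discount, exploration

def step(s, a):                              # toy MDP: action 1 in state 1 pays off
    r = 1.0 if (s == 1 and a == 1) else 0.0
    return r, rng.integers(n_states)         # random next state (illustrative)

s = 0
for _ in range(20_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    r, s2 = step(s, a)
    # Sample-based Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s2,a')
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(Q)                                     # Q(1,1) should dominate Q(1,0)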

arXiv📈🤖
A Survey of Reinforcement Learning For Economics
By Rawat

12.03.2026 03:52 👍 0 🔁 0 💬 0 📌 0
As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments becomes critical. This article examines how Large Language Models coordinate in markets characterized by information asymmetries, in which providers of services have incentives to exploit that asymmetry for their own economic gain. To that end, we conduct simulations with GPT-5.1 agents in credence goods markets, manipulating the institutional framework (free market, verifiability, liability), the LLM agents' social preferences (default, self-interested, inequity-averse, efficiency-loving), and reputation mechanisms across one-shot and repeated 16-round interactions. In one-shot settings, LLM agents largely fail to establish cooperation, with markets breaking down except under liability rules or when experts have efficiency-loving preferences. Repeated interactions solve consumer participation through competitive price reduction, but expert fraud remains entrenched absent explicit other-regarding preferences. LLM consumers focus narrowly on price levels rather than understanding the strategic incentives embedded in markups, making them vulnerable to exploitation. Compared to human experiments, LLM markets exhibit substantially higher consumer participation but much greater market concentration, lower prices, and more polarized fraud patterns. The effects of institutions like verifiability and reputation are also much more ambiguous. Surplus shifts dramatically toward consumers under social-preference objectives. These findings suggest that institutional design for AI-agent markets requires fundamentally different approaches than those effective for human actors, with social-preference alignment emerging as the primary determinant of market efficiency.
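A stylized one-round payoff skeleton of the credence-goods game being simulated, with invented prices and costs: liability rules out undertreatment, verifiability rules out overcharging, and the free market allows both. The paper's experiments wrap GPT-5.1 agents around choices like these; this sketch is not their implementation.

from dataclasses import dataclass

@dataclass
class Rules:
    liability: bool      # expert must fix the problem
    verifiability: bool  # expert can only charge for the treatment provided

V, (c_minor, c_major) = 3.0, (0.5, 1.5)        # consumer value, treatment costs
p_minor, p_major = 1.0, 2.0                    # posted prices (illustrative)

def round_payoffs(problem_major: bool, treat_major: bool,
                  charge_major: bool, rules: Rules):
    if rules.liability and problem_major:
        treat_major = True                     # undertreatment is ruled out
    if rules.verifiability:
        charge_major = treat_major             # overcharging is ruled out
    fixed = treat_major or not problem_major
    price = p_major if charge_major else p_minor
    cost = c_major if treat_major else c_minor
    return (V if fixed else 0.0) - price, price - cost   # (consumer, expert)

# Free market: undertreat and overcharge -> fraud pays the expert.
print(round_payoffs(True, False, True, Rules(liability=False, verifiability=False)))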

arXiv📈🤖
LLM-Agent Interactions on Markets with Information Asymmetries
By Erlei, Meub

12.03.2026 03:49 👍 1 🔁 0 💬 0 📌 0
Immediately after the establishment of the New Meiji Government in the 19th century, a system of conscription was adopted. The exemption rule has changed several times. Using individual-level panel data on the academic performance of Keio Gijuku, I found a surge in the family head's student rate between 1884 and 1888, and the rate declined immediately thereafter. After regaining privileges for private school students, family head performance declined, and the difference between head and non-family heads disappeared. This made it evident that conscription increased educational attendance quantitatively, but did not qualitatively improve academic performance.

Immediately after the establishment of the New Meiji Government in the 19th century, a system of conscription was adopted. The exemption rule has changed several times. Using individual-level panel data on the academic performance of Keio Gijuku, I found a surge in the family head's student rate between 1884 and 1888, and the rate declined immediately thereafter. After regaining privileges for private school students, family head performance declined, and the difference between head and non-family heads disappeared. This made it evident that conscription increased educational attendance quantitatively, but did not qualitatively improve academic performance.

arXivπŸ“ˆπŸ€–
Conscription and its exemption in 19th Century Japan: Incentivized family head in educational market
By Yamamura

11.03.2026 22:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
This survey (re)introduces reinforcement learning methods to economists. The curse of dimensionality limits how far exact dynamic programming can be effectively applied, forcing us to rely on suitably "small" problems or our ability to convert "big" problems into smaller ones. While this reduction has been sufficient for many classical applications, a growing class of economic models resists such reduction. Reinforcement learning algorithms offer a natural, sample-based extension of dynamic programming, extending tractability to problems with high-dimensional states, continuous actions, and strategic interactions. I review the theory connecting classical planning to modern learning algorithms and demonstrate their mechanics through simulated examples in pricing, inventory control, strategic games, and preference elicitation. I also examine the practical vulnerabilities of these algorithms, noting their brittleness, sample inefficiency, sensitivity to hyperparameters, and the absence of global convergence guarantees outside of tabular settings. The successes of reinforcement learning remain strictly bounded by these constraints, as well as a reliance on accurate simulators. When guided by economic structure, reinforcement learning provides a remarkably flexible framework. It stands as an imperfect, but promising, addition to the computational economist's toolkit. A companion survey (Rust and Rawat, 2026b) covers the inverse problem of inferring preferences from observed behavior.

This survey (re)introduces reinforcement learning methods to economists. The curse of dimensionality limits how far exact dynamic programming can be effectively applied, forcing us to rely on suitably "small" problems or our ability to convert "big" problems into smaller ones. While this reduction has been sufficient for many classical applications, a growing class of economic models resists such reduction. Reinforcement learning algorithms offer a natural, sample-based extension of dynamic programming, extending tractability to problems with high-dimensional states, continuous actions, and strategic interactions. I review the theory connecting classical planning to modern learning algorithms and demonstrate their mechanics through simulated examples in pricing, inventory control, strategic games, and preference elicitation. I also examine the practical vulnerabilities of these algorithms, noting their brittleness, sample inefficiency, sensitivity to hyperparameters, and the absence of global convergence guarantees outside of tabular settings. The successes of reinforcement learning remain strictly bounded by these constraints, as well as a reliance on accurate simulators. When guided by economic structure, reinforcement learning provides a remarkably flexible framework. It stands as an imperfect, but promising, addition to the computational economist's toolkit. A companion survey (Rust and Rawat, 2026b) covers the inverse problem of inferring preferences from observed behavior.

arXivπŸ“ˆπŸ€–
A Survey of Reinforcement Learning For Economics
By Rawat

11.03.2026 22:11 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments becomes critical. This article examines how Large Language Models coordinate on markets that are characterized by information asymmetries and in which providers of services have incentives to exploit that asymmetry for their own economic gain. To that end, we conduct simulations with GPT-5.1 agents in credence goods markets, manipulating the institutional framework (free market, verifiability, liability), LLM agent's social preferences (default, self-interested, inequity-averse, efficiency-loving), and reputation mechanisms across one-shot and repeated 16-round interactions. In one-shot settings, LLM agents largely fail to establish cooperation, with markets breaking down except under liability rules or when experts have efficiency-loving preferences. Repeated interactions solve consumer participation through competitive price reduction, but expert fraud remains entrenched absent explicit other-regarding preferences. LLM consumers focus narrowly on price levels rather than understanding strategic incentives embedded in markups, making them vulnerable to exploitation. Compared to human experiments, LLM markets exhibit substantially higher consumer participation but much greater market concentration, lower prices, and more polarized fraud patterns. The effect of institutions like verifiability and reputation is also much more ambiguous. Surplus shifts dramatically toward consumers under social-preference objectives. These findings suggest that institutional design for AI agent markets requires fundamentally different approaches than those effective for human actors, with social preference alignment emerging as the primary determinant of market efficiency.

As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments becomes critical. This article examines how Large Language Models coordinate on markets that are characterized by information asymmetries and in which providers of services have incentives to exploit that asymmetry for their own economic gain. To that end, we conduct simulations with GPT-5.1 agents in credence goods markets, manipulating the institutional framework (free market, verifiability, liability), LLM agent's social preferences (default, self-interested, inequity-averse, efficiency-loving), and reputation mechanisms across one-shot and repeated 16-round interactions. In one-shot settings, LLM agents largely fail to establish cooperation, with markets breaking down except under liability rules or when experts have efficiency-loving preferences. Repeated interactions solve consumer participation through competitive price reduction, but expert fraud remains entrenched absent explicit other-regarding preferences. LLM consumers focus narrowly on price levels rather than understanding strategic incentives embedded in markups, making them vulnerable to exploitation. Compared to human experiments, LLM markets exhibit substantially higher consumer participation but much greater market concentration, lower prices, and more polarized fraud patterns. The effect of institutions like verifiability and reputation is also much more ambiguous. Surplus shifts dramatically toward consumers under social-preference objectives. These findings suggest that institutional design for AI agent markets requires fundamentally different approaches than those effective for human actors, with social preference alignment emerging as the primary determinant of market efficiency.

arXivπŸ“ˆπŸ€–
LLM-Agent Interactions on Markets with Information Asymmetries
By Erlei, Meub

11.03.2026 22:08 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Immediately after the establishment of the New Meiji Government in the 19th century, a system of conscription was adopted. The exemption rule has changed several times. Using individual-level panel data on the academic performance of Keio Gijuku, I found a surge in the family head's student rate between 1884 and 1888, and the rate declined immediately thereafter. After regaining privileges for private school students, family head performance declined, and the difference between head and non-family heads disappeared. This made it evident that conscription increased educational attendance quantitatively, but did not qualitatively improve academic performance.

Immediately after the establishment of the New Meiji Government in the 19th century, a system of conscription was adopted. The exemption rule has changed several times. Using individual-level panel data on the academic performance of Keio Gijuku, I found a surge in the family head's student rate between 1884 and 1888, and the rate declined immediately thereafter. After regaining privileges for private school students, family head performance declined, and the difference between head and non-family heads disappeared. This made it evident that conscription increased educational attendance quantitatively, but did not qualitatively improve academic performance.

arXivπŸ“ˆπŸ€–
Conscription and its exemption in 19th Century Japan: Incentivized family head in educational market
By Yamamura

11.03.2026 19:27 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
This survey (re)introduces reinforcement learning methods to economists. The curse of dimensionality limits how far exact dynamic programming can be effectively applied, forcing us to rely on suitably "small" problems or our ability to convert "big" problems into smaller ones. While this reduction has been sufficient for many classical applications, a growing class of economic models resists such reduction. Reinforcement learning algorithms offer a natural, sample-based extension of dynamic programming, extending tractability to problems with high-dimensional states, continuous actions, and strategic interactions. I review the theory connecting classical planning to modern learning algorithms and demonstrate their mechanics through simulated examples in pricing, inventory control, strategic games, and preference elicitation. I also examine the practical vulnerabilities of these algorithms, noting their brittleness, sample inefficiency, sensitivity to hyperparameters, and the absence of global convergence guarantees outside of tabular settings. The successes of reinforcement learning remain strictly bounded by these constraints, as well as a reliance on accurate simulators. When guided by economic structure, reinforcement learning provides a remarkably flexible framework. It stands as an imperfect, but promising, addition to the computational economist's toolkit. A companion survey (Rust and Rawat, 2026b) covers the inverse problem of inferring preferences from observed behavior.

This survey (re)introduces reinforcement learning methods to economists. The curse of dimensionality limits how far exact dynamic programming can be effectively applied, forcing us to rely on suitably "small" problems or our ability to convert "big" problems into smaller ones. While this reduction has been sufficient for many classical applications, a growing class of economic models resists such reduction. Reinforcement learning algorithms offer a natural, sample-based extension of dynamic programming, extending tractability to problems with high-dimensional states, continuous actions, and strategic interactions. I review the theory connecting classical planning to modern learning algorithms and demonstrate their mechanics through simulated examples in pricing, inventory control, strategic games, and preference elicitation. I also examine the practical vulnerabilities of these algorithms, noting their brittleness, sample inefficiency, sensitivity to hyperparameters, and the absence of global convergence guarantees outside of tabular settings. The successes of reinforcement learning remain strictly bounded by these constraints, as well as a reliance on accurate simulators. When guided by economic structure, reinforcement learning provides a remarkably flexible framework. It stands as an imperfect, but promising, addition to the computational economist's toolkit. A companion survey (Rust and Rawat, 2026b) covers the inverse problem of inferring preferences from observed behavior.

arXivπŸ“ˆπŸ€–
A Survey of Reinforcement Learning For Economics
By Rawat

11.03.2026 19:24 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments becomes critical. This article examines how Large Language Models coordinate on markets that are characterized by information asymmetries and in which providers of services have incentives to exploit that asymmetry for their own economic gain. To that end, we conduct simulations with GPT-5.1 agents in credence goods markets, manipulating the institutional framework (free market, verifiability, liability), LLM agent's social preferences (default, self-interested, inequity-averse, efficiency-loving), and reputation mechanisms across one-shot and repeated 16-round interactions. In one-shot settings, LLM agents largely fail to establish cooperation, with markets breaking down except under liability rules or when experts have efficiency-loving preferences. Repeated interactions solve consumer participation through competitive price reduction, but expert fraud remains entrenched absent explicit other-regarding preferences. LLM consumers focus narrowly on price levels rather than understanding strategic incentives embedded in markups, making them vulnerable to exploitation. Compared to human experiments, LLM markets exhibit substantially higher consumer participation but much greater market concentration, lower prices, and more polarized fraud patterns. The effect of institutions like verifiability and reputation is also much more ambiguous. Surplus shifts dramatically toward consumers under social-preference objectives. These findings suggest that institutional design for AI agent markets requires fundamentally different approaches than those effective for human actors, with social preference alignment emerging as the primary determinant of market efficiency.

As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments becomes critical. This article examines how Large Language Models coordinate on markets that are characterized by information asymmetries and in which providers of services have incentives to exploit that asymmetry for their own economic gain. To that end, we conduct simulations with GPT-5.1 agents in credence goods markets, manipulating the institutional framework (free market, verifiability, liability), LLM agent's social preferences (default, self-interested, inequity-averse, efficiency-loving), and reputation mechanisms across one-shot and repeated 16-round interactions. In one-shot settings, LLM agents largely fail to establish cooperation, with markets breaking down except under liability rules or when experts have efficiency-loving preferences. Repeated interactions solve consumer participation through competitive price reduction, but expert fraud remains entrenched absent explicit other-regarding preferences. LLM consumers focus narrowly on price levels rather than understanding strategic incentives embedded in markups, making them vulnerable to exploitation. Compared to human experiments, LLM markets exhibit substantially higher consumer participation but much greater market concentration, lower prices, and more polarized fraud patterns. The effect of institutions like verifiability and reputation is also much more ambiguous. Surplus shifts dramatically toward consumers under social-preference objectives. These findings suggest that institutional design for AI agent markets requires fundamentally different approaches than those effective for human actors, with social preference alignment emerging as the primary determinant of market efficiency.

arXivπŸ“ˆπŸ€–
LLM-Agent Interactions on Markets with Information Asymmetries
By Erlei, Meub

11.03.2026 19:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Immediately after the establishment of the New Meiji Government in the 19th century, a system of conscription was adopted. The exemption rule has changed several times. Using individual-level panel data on the academic performance of Keio Gijuku, I found a surge in the family head's student rate between 1884 and 1888, and the rate declined immediately thereafter. After regaining privileges for private school students, family head performance declined, and the difference between head and non-family heads disappeared. This made it evident that conscription increased educational attendance quantitatively, but did not qualitatively improve academic performance.

arXivπŸ“ˆπŸ€–
Conscription and its exemption in 19th Century Japan: Incentivized family head in educational market
By Yamamura

11.03.2026 16:26 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
This survey (re)introduces reinforcement learning methods to economists. The curse of dimensionality limits how far exact dynamic programming can be effectively applied, forcing us to rely on suitably "small" problems or our ability to convert "big" problems into smaller ones. While this reduction has been sufficient for many classical applications, a growing class of economic models resists such reduction. Reinforcement learning algorithms offer a natural, sample-based extension of dynamic programming, extending tractability to problems with high-dimensional states, continuous actions, and strategic interactions. I review the theory connecting classical planning to modern learning algorithms and demonstrate their mechanics through simulated examples in pricing, inventory control, strategic games, and preference elicitation. I also examine the practical vulnerabilities of these algorithms, noting their brittleness, sample inefficiency, sensitivity to hyperparameters, and the absence of global convergence guarantees outside of tabular settings. The successes of reinforcement learning remain strictly bounded by these constraints, as well as a reliance on accurate simulators. When guided by economic structure, reinforcement learning provides a remarkably flexible framework. It stands as an imperfect, but promising, addition to the computational economist's toolkit. A companion survey (Rust and Rawat, 2026b) covers the inverse problem of inferring preferences from observed behavior.

arXivπŸ“ˆπŸ€–
A Survey of Reinforcement Learning For Economics
By Rawat

11.03.2026 16:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
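
As a concrete instance of the sample-based extension of dynamic programming described above, here is a minimal tabular Q-learning loop for a toy inventory-control problem, one of the abstract's example domains. The environment and all parameters are illustrative assumptions, not the survey's code.

import numpy as np

rng = np.random.default_rng(0)
MAX_STOCK, MAX_ORDER = 10, 5
price, unit_cost, hold_cost = 2.0, 1.0, 0.1

def step(stock, order):
    """Simulate one period: random demand, sales, holding costs."""
    demand = rng.poisson(3)
    sales = min(stock + order, demand)
    next_stock = min(stock + order - sales, MAX_STOCK)
    reward = price * sales - unit_cost * order - hold_cost * next_stock
    return next_stock, reward

Q = np.zeros((MAX_STOCK + 1, MAX_ORDER + 1))
alpha, gamma, eps = 0.1, 0.95, 0.1       # step size, discount, exploration
s = 0
for _ in range(50_000):
    a = rng.integers(MAX_ORDER + 1) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # temporal-difference update
    s = s2
print(Q.argmax(axis=1))   # learned order quantity for each stock level

No transition matrix is ever written down: the agent learns the policy purely from simulated transitions, which is exactly the sidestep of the curse of dimensionality the survey emphasizes.
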
As AI agents increasingly act on behalf of human stakeholders in economic settings, understanding their behavior in complex market environments becomes critical. This article examines how Large Language Models coordinate in markets characterized by information asymmetries, in which service providers have incentives to exploit that asymmetry for their own economic gain. To that end, we conduct simulations with GPT-5.1 agents in credence goods markets, manipulating the institutional framework (free market, verifiability, liability), LLM agents' social preferences (default, self-interested, inequity-averse, efficiency-loving), and reputation mechanisms across one-shot and repeated 16-round interactions. In one-shot settings, LLM agents largely fail to establish cooperation, with markets breaking down except under liability rules or when experts have efficiency-loving preferences. Repeated interaction sustains consumer participation through competitive price reductions, but expert fraud remains entrenched absent explicit other-regarding preferences. LLM consumers focus narrowly on price levels rather than on the strategic incentives embedded in markups, making them vulnerable to exploitation. Compared to human experiments, LLM markets exhibit substantially higher consumer participation but much greater market concentration, lower prices, and more polarized fraud patterns. The effects of institutions such as verifiability and reputation are also much more ambiguous. Surplus shifts dramatically toward consumers under social-preference objectives. These findings suggest that institutional design for AI agent markets requires fundamentally different approaches than those effective for human actors, with social-preference alignment emerging as the primary determinant of market efficiency.

arXivπŸ“ˆπŸ€–
LLM-Agent Interactions on Markets with Information Asymmetries
By Erlei, Meub

11.03.2026 16:21 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Hundreds of millions of farmers make high-stakes decisions under uncertainty about future weather. Forecasts can inform these decisions, but available choices and their risks and benefits vary between farmers. We introduce a decision-theory framework for designing useful forecasts in settings where the forecaster cannot prescribe optimal actions because farmers' circumstances are heterogeneous. We apply this framework to the case of seasonal onset of monsoon rains, a key date for planting decisions and agricultural investments in many tropical countries. We develop a system for tailoring forecasts to the requirements of this framework by blending systematically benchmarked artificial intelligence (AI) weather prediction models with a new "evolving farmer expectations" statistical model. This statistical model applies Bayesian inference to historical observations to predict time-varying probabilities of first-occurrence events throughout a season. The blended system yields more skillful Indian monsoon forecasts at longer lead times than its components or any multi-model average. In 2025, this system was deployed operationally in a government-led program that delivered subseasonal monsoon onset forecasts to 38 million Indian farmers, skillfully predicting that year's early-summer anomalous dry period. This decision-theory framework and blending system offer a pathway for developing climate adaptation tools for large vulnerable populations around the world.

arXivπŸ“ˆπŸ€–
Designing probabilistic AI monsoon forecasts to inform agricultural decision-making
By Aitken, Masiwal, Marchakitus et al

11.03.2026 04:09 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
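
The "evolving farmer expectations" model is described above only at a high level. A minimal sketch of the first-occurrence bookkeeping such a model needs is a daily onset hazard with a Beta-Bernoulli posterior updated from historical onset dates; the Beta-Bernoulli form and all values here are our assumptions, not the paper's specification.

import numpy as np

def onset_probabilities(onset_days, season_len=120, a0=1.0, b0=20.0):
    """P(onset has occurred by day t), from historical onset days."""
    a = np.full(season_len, a0)   # pseudo-counts: onset on day d
    b = np.full(season_len, b0)   # pseudo-counts: no onset yet on day d
    for day in onset_days:        # each historical year contributes one onset date
        a[day] += 1
        b[:day] += 1              # all earlier days observed as "not yet"
    hazard = a / (a + b)          # posterior-mean daily onset hazard
    p_not_yet = np.cumprod(1.0 - hazard)
    return 1.0 - p_not_yet        # cumulative first-occurrence probability

# e.g. cumulative onset probabilities implied by five historical onset dates
print(onset_probabilities([60, 58, 65, 72, 61])[55:75])

The key property is that the output is a full time-varying probability curve rather than a single predicted date, matching the framework's refusal to prescribe one optimal action for heterogeneous farmers.
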
Large language models (LLMs) have enabled agent-based systems that aim to automate scientific research workflows. Most existing approaches focus on fully autonomous discovery, where AI systems generate research ideas, conduct analyses, and produce manuscripts with minimal human involvement. However, empirical research in economics and the social sciences poses additional constraints: research questions must be grounded in available datasets, identification strategies require careful design, and human judgment remains essential for evaluating economic significance. We introduce HLER (Human-in-the-Loop Economic Research), a multi-agent architecture that supports empirical research automation while preserving critical human oversight. The system orchestrates specialized agents for data auditing, data profiling, hypothesis generation, econometric analysis, manuscript drafting, and automated review. A key design principle is dataset-aware hypothesis generation, where candidate research questions are constrained by dataset structure, variable availability, and distributional diagnostics, reducing infeasible or hallucinated hypotheses. HLER further implements a two-loop architecture: a question quality loop that screens and selects feasible hypotheses, and a research revision loop where automated review triggers re-analysis and manuscript revision. Human decision gates are embedded at key stages, allowing researchers to guide the automated pipeline. Experiments on three empirical datasets show that dataset-aware hypothesis generation produces feasible research questions in 87% of cases (versus 41% under unconstrained generation), while complete empirical manuscripts can be produced at an average API cost of $0.80-$1.50 per run. These results suggest that Human-AI collaborative pipelines may provide a practical path toward scalable empirical research.

arXivπŸ“ˆπŸ€–
HLER: Human-in-the-Loop Economic Research via Multi-Agent Pipelines for Empirical Discovery
By Zhu, Wang

11.03.2026 04:06 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
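
A control-flow sketch of the two-loop architecture the abstract describes. Every helper below is a trivial stand-in for what would be an LLM agent or a human decision gate in HLER itself; none of this is the system's actual code.

def generate_hypotheses(profile, n): return [f"H{i}: effect of x{i} on y" for i in range(n)]
def feasible(q, profile): return True              # dataset-aware screening would go here
def human_gate(prompt, options):                   # human decision gate (stubbed)
    print(prompt, options)
    return options[0]
def run_econometrics(q, data): return {"beta": 0.1, "se": 0.04}
def write_manuscript(q, res): return f"Draft of '{q}' (beta={res['beta']})"
def auto_review(draft): return type("Review", (), {"passes": True})()

def run_pipeline(dataset, max_q=5, max_rev=3):
    profile = {"columns": list(dataset)}           # data auditing + profiling stub
    # Question quality loop: generate dataset-aware hypotheses, screen, let a human pick.
    questions = [q for q in generate_hypotheses(profile, max_q) if feasible(q, profile)]
    question = human_gate("Select a research question:", questions)
    # Research revision loop: automated review can trigger re-analysis and redrafting.
    draft = ""
    for _ in range(max_rev):
        draft = write_manuscript(question, run_econometrics(question, dataset))
        if auto_review(draft).passes:
            break
    return draft

print(run_pipeline({"y": [1, 2], "x0": [0, 1]}))

The design point is that the expensive generative steps sit inside bounded loops with explicit exit conditions and human gates, rather than running open-ended.
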
We develop a stochastic macro-financial model in continuous time by integrating two specifications of the Keen economic framework with a financial market driven by a jump-diffusion process. The economic block of the model combines monetary debt-deflation mechanisms with Ponzi-type financial destabilization and is influenced by the financial market through a stochastic interest rate that depends on asset price returns. The financial market block consists of an asset whose price follows a jump-diffusion process with endogenous, state-dependent jump intensities driven by speculative credit flows. The model formalizes a feedback loop linking credit expansion, crash risk, perceived return dynamics, and bank lending spreads. Under suitable parameter restrictions, we establish global existence and non-explosion of the coupled system. Numerical experiments illustrate how variations in credit sensitivity and jump parameters generate regimes ranging from stable growth to recurrent boom-bust cycles. The framework provides a tractable setting for analyzing endogenous financial fragility within a mathematically well-posed macro-financial system.

arXivπŸ“ˆπŸ€–
From debt crises to financial crashes (and back): a stock-flow consistent model for stock price bubbles
By Grasselli, Nguyen-Huu

11.03.2026 04:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
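
A minimal Euler-Maruyama sketch of the financial block's key ingredient: an asset price with a jump intensity that rises with a speculative-credit state. The drift, volatility, credit dynamics, and intensity function below are illustrative assumptions, not the paper's calibration.

import numpy as np

rng = np.random.default_rng(1)
T, n = 10.0, 10_000
dt = T / n
mu, sigma, jump_size = 0.05, 0.2, -0.3     # drift, volatility, crash size

def intensity(credit):                      # crashes become more likely as credit expands
    return 0.1 + 2.0 * max(credit, 0.0)

p, credit = 1.0, 0.0
path = [p]
for _ in range(n):
    credit += (0.5 * mu - 0.2 * credit) * dt           # toy speculative-credit flow
    jump = rng.random() < intensity(credit) * dt       # thinned Poisson jump indicator
    dW = rng.normal(0.0, np.sqrt(dt))
    p *= 1.0 + mu * dt + sigma * dW + (jump_size if jump else 0.0)
    path.append(p)
print(f"final price {path[-1]:.3f}, min along path {min(path):.3f}")

Making the intensity a function of the credit state, rather than a constant, is what creates the feedback loop the abstract emphasizes: credit expansion raises crash risk, and crashes feed back into returns.
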
Constructal Law states that a finite-size flow system that persists in time evolves its configuration so as to provide progressively easier access to the currents that flow through it. Classical Constructal theory derives hierarchical flow architectures from static resistance minimization under finite-size constraints, but many transport systems operate under irreversible limits that induce regime switching and discontinuous adjustment laws. We formulate Constructal evolution as an autonomous nonsmooth dynamical system. The architectural configuration is modeled as the state of a Filippov differential inclusion defined on a compact forward-invariant admissible set. Irreversible transport constraints generate switching manifolds across which the adjustment field is discontinuous. A resistance dissipation inequality encodes the Constructal principle of progressively improving access as a nonsmooth Lyapunov condition, while a uniform contraction assumption provides spectral bounds on the generalized Jacobians of the regime-dependent dynamics. Under these conditions we prove that the resulting inclusion admits a unique equilibrium architecture and that every admissible trajectory converges to it exponentially. Finite size, irreversibility, and resistance dissipation therefore imply existence, uniqueness, and global stability of persistent flow configurations without invoking static optimization. As an application, the classical area-to-point transport hierarchy of Bejan et al. is embedded in the dynamical framework. The optimal assembly ratios appear as switching manifolds, while the classical scaling relations arise as sliding invariant sets of the Filippov inclusion. Their intersection defines the uniquely selected globally attracting architecture.

arXivπŸ“ˆπŸ€–
Constructal Evolution as a Nonsmooth Dynamical System: Stability and Selection of Flow Architectures
By Stiefenhofer

11.03.2026 04:00 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
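
In symbols, one plausible reading of the abstract's setup (the notation is ours, not the paper's):

\[
\dot{x}(t) \in F(x(t)), \qquad x(t) \in K \subset \mathbb{R}^n, \quad K \text{ compact and forward invariant},
\]
where $F$ is a Filippov set-valued map, discontinuous across the switching manifolds, together with a resistance functional $R : K \to \mathbb{R}_{\ge 0}$ satisfying the dissipation inequality
\[
\frac{d}{dt}\, R(x(t)) \le -\alpha\, R(x(t)) \quad \text{for a.e. } t, \qquad \alpha > 0.
\]
Read as a nonsmooth Lyapunov condition, the inequality forces $R$ to decay exponentially along every admissible trajectory, which is how the abstract's existence, uniqueness, and global-stability claims for the equilibrium architecture would follow.
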
This paper studies optimal consumption and saving decisions under uncertainty about the transition dynamics of the economic environment. We consider a general optimal savings problem in which the exogenous state governing discounting, capital returns, and nonfinancial income follows a Markov process with unknown transition probability, and agents update their beliefs over time through Bayesian learning. Despite the added endogenous state from belief updating, we establish the existence, uniqueness, and key structural properties of the optimal policy, including monotonicity and concavity. We also develop an efficient computational method and use it to study how transition uncertainty and learning interact with precautionary motives and wealth accumulation, highlighting a dynamic mechanism through which uncertainty about regime persistence shapes consumption dynamics and long-run household wealth.

arXivπŸ“ˆπŸ€–
Optimal Savings under Transition Uncertainty and Learning Dynamics
By Ma, Zhang

11.03.2026 03:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
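
A coarse-grid sketch of the belief-augmented problem described above: two income regimes with uncertain persistence, a grid over the posterior mean of the persistence parameter, and value iteration over (assets, regime, belief). All functional forms, grids, and the crude belief-update rule are illustrative assumptions, not the paper's method.

import numpy as np

beta_disc, R, y = 0.95, 1.03, np.array([0.5, 1.5])   # discount factor, gross return, incomes
a_grid = np.linspace(0.0, 10.0, 60)                   # asset grid
p_grid = np.linspace(0.05, 0.95, 19)                  # posterior mean of regime persistence

def u(c):
    return np.log(np.maximum(c, 1e-10))               # log utility, clipped for safety

V = np.zeros((60, 2, 19))                             # value over (assets, regime, belief)
for _ in range(200):                                  # value iteration
    V_new = np.empty_like(V)
    for ia, a in enumerate(a_grid):
        for z in (0, 1):
            for ip, p in enumerate(p_grid):
                cash = R * a + y[z]
                c = cash - a_grid                      # consumption for each savings choice
                # Crude Bayesian step: observing "stay" raises the posterior mean of
                # persistence by one grid point, observing "switch" lowers it.
                ip_stay, ip_switch = min(ip + 1, 18), max(ip - 1, 0)
                EV = p * V[:, z, ip_stay] + (1 - p) * V[:, 1 - z, ip_switch]
                vals = np.where(c > 0, u(c) + beta_disc * EV, -np.inf)
                V_new[ia, z, ip] = vals.max()
    V = V_new
print(V[0, :, 9])   # value at zero assets, neutral belief, in each regime

The belief index is the "added endogenous state" the abstract refers to; even this toy version shows how uncertainty about regime persistence feeds directly into the precautionary-saving margin.
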