Edition 016 • April 19, 2026

The Credibility Report

Actuarial Intelligence for Insurance Professionals

What’s in this edition

Primary-source market updates (no aggregator links) plus the latest actuarial-relevant arXiv papers (score ≥ 15, last 14 days).

📰 Headlines (primary sources)

Travelers stock flat, Marsh gets a bump as Q1 insurance earnings kick off

Read source → • S&P Global

Illinois Storms Highlight Mounting Severe Weather Losses

Read source → • Triple-I

Welcome Back, BRIC

Read source → • Triple-I

Dog-Related Injury Claims on the Rise in 2025

Read source → • Triple-I

Convective Storm Losses: Historic 3-Year Streak

Read source → • Triple-I

🔬 Research Spotlight (arXiv)

Evaluating the impact of longitudinal treatment strategies in the presence of informative monitoring and time-dependent confounding

arXiv • Score: 18 • 2026-04-10

Routinely collected data from electronic health records (EHRs) provide opportunities to study effects of longitudinal treatment strategies in real-world clinical settings. A challenge presented by EHR data is that the frequency of covariate monitoring differs by patient and covariate type, and over time, and may be informative about a patient's health status. Many causal inference methods assume measurements of covariates are observed at a common set of regular time points. In this paper we describe and evaluate methods for estimating causal effects of longitudinal treatments on time-to-event outcomes in the presence of informative monitoring of time-dependent confounders. We show how methods based on inverse probability weighting, G-computation and longitudinal targeted maximum likelihood estimation (TMLE) can be adapted to allow for informative monitoring by incorporating monitoring indicator variables as additional time-dependent confounders. We evaluate these methods using a simulation study, comparing them against simpler approaches that ignore monitoring variables. We demonstrate that ignoring monitoring can result in biased estimates of treatment effects. The methods are illustrated through an investigation into the effect of early versus delayed initiation of invasive mechanical ventilation on mortality of intensive care patients using routinely collected data from an intensive care unit. We consider static treatment strategies such as 'always treat' and 'never treat' but also generalise to treatment strategies that allow for flexibility in the exact initiation time and duration of treatment.
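A toy illustration of the core idea: if monitoring is driven by (unobserved-to-the-analyst) health status and also drives treatment, a naive treated-vs-untreated contrast is confounded, while inverse probability weighting that accounts for the monitoring indicator recovers the effect. This is a deliberately simplified single-time-point sketch, not the paper's longitudinal estimators; all variable names and parameter values are hypothetical, and the true propensity is assumed known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Latent health status: higher values = sicker patients.
health = rng.standard_normal(n)

# Sicker patients get monitored more often (informative monitoring).
p_monitor = 1.0 / (1.0 + np.exp(-2.0 * health))
monitored = rng.random(n) < p_monitor

# Treatment assignment depends on monitoring status.
p_treat = np.where(monitored, 0.8, 0.2)
treated = rng.random(n) < p_treat

# Outcome: true treatment effect is +1.0; sicker patients do worse.
outcome = -health + 1.0 * treated + rng.standard_normal(n)

# Naive contrast ignores that monitoring (hence health) confounds treatment.
naive = outcome[treated].mean() - outcome[~treated].mean()

# IPW with the monitoring indicator in the propensity model
# (the true propensity is used here; in practice it would be estimated).
w = treated / p_treat + (~treated) / (1.0 - p_treat)
ipw = (np.sum(w * treated * outcome) / np.sum(w * treated)
       - np.sum(w * ~treated * outcome) / np.sum(w * ~treated))

print(f"naive: {naive:.2f}, IPW with monitoring: {ipw:.2f}")  # truth = 1.0
```

With these parameters the naive contrast is biased well below 1.0, while the weighted (Hajek) estimate lands near the true effect.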

Open paper →

α-robust utility maximization with intractable claims: A quantile optimization approach

arXiv • Score: 17 • 2026-04-06

This paper studies an α-robust utility maximization problem where an investor faces an intractable claim -- an exogenous contingent claim with known marginal distribution but unspecified dependence structure with financial market returns. The α-robust criterion interpolates between worst-case (α = 0) and best-case (α = 1) evaluations, generalizing both extremes through a continuous ambiguity attitude parameter. For weighted exponential utilities, we establish via rearrangement inequalities and comonotonicity theory that the α-robust risk measure is law-invariant, depending only on marginal distributions. This transforms the dynamic stochastic control problem into a concave static quantile optimization over a convex domain. We derive optimality conditions via calculus of variations and characterize the optimal quantile as the solution to a two-dimensional first-order ordinary differential equation system, which is a system of variational inequalities with mixed boundary conditions, enabling numerical solution. Our framework naturally accommodates additional risk constraints such as Value-at-Risk and Expected Shortfall. Numerical experiments reveal how ambiguity attitude, market conditions, and claim characteristics interact to shape optimal payoffs.
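A small sketch of the worst-case/best-case mechanics the abstract invokes: because u(w - c) is supermodular for concave u, the comonotone coupling of wealth and claim maximizes expected utility and the countermonotone coupling minimizes it (rearrangement inequality), so both extremes depend only on the marginals. The marginal distributions, risk-aversion level, and α below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gamma = 0.5   # risk-aversion coefficient (hypothetical)
alpha = 0.3   # ambiguity attitude: 0 = pure worst case, 1 = pure best case

def u(x):
    return -np.exp(-gamma * x) / gamma   # exponential utility

# Known marginals represented by sorted samples, i.e. empirical
# quantile functions evaluated on a common grid.
wealth = np.sort(rng.lognormal(mean=0.05, sigma=0.2, size=n))
claim = np.sort(rng.gamma(shape=2.0, scale=0.5, size=n))

# u(w - c) is supermodular, so pairing quantiles in the same order
# (comonotone) maximizes expected utility; opposite order minimizes it.
best_case = u(wealth - claim).mean()
worst_case = u(wealth - claim[::-1]).mean()

# α-robust evaluation interpolates between the two extremes.
alpha_robust = (1 - alpha) * worst_case + alpha * best_case
print(f"worst: {worst_case:.4f}, robust: {alpha_robust:.4f}, best: {best_case:.4f}")
```

The α-robust value always lies between the two dependence extremes, which is the interpolation property the abstract describes.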

Open paper →

Dividend ratcheting and capital injection under the Cramér-Lundberg model: Strong solution and optimal strategy

arXiv • Score: 17 • 2026-04-06

We consider an optimal dividend payout problem for an insurance company whose surplus follows the classical Cramér-Lundberg model. The dividend rate is subject to a ratcheting constraint (i.e., it must be nondecreasing over time), and the company may inject capital at a proportional cost to avoid ruin. This problem gives rise to a stochastic control problem with a self-path-dependent control constraint, costly capital injections, and jump-diffusion dynamics. The associated Hamilton-Jacobi-Bellman (HJB) equation is a partial integro-differential variational inequality featuring both a nonlocal integral term and a gradient constraint. We develop a systematic probabilistic and PDE-based approach to solve this HJB equation. By discretizing the space of admissible dividend rates, we construct a sequence of approximating regime-switching systems of ordinary integro-differential equations. Through careful a priori estimates and a limiting argument, we prove the existence and uniqueness of a strong solution in a suitable space. This regularity result is fundamental: it allows us to characterize the optimal dividend policy via a switching free boundary and to construct an explicit optimal feedback control strategy. To the best of our knowledge, this is the first complete solution -- comprising both the value function and an implementable optimal strategy -- for a dividend ratcheting problem with capital injection under the Cramér-Lundberg model. Our work advances the mathematical theory of optimal stochastic control beyond the standard viscosity solution framework, providing a rigorous foundation for dividend policy design in economics.
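To make the ratcheting constraint concrete, here is a minimal Euler-scheme simulation of a Cramér-Lundberg surplus under a hypothetical threshold rule: the dividend rate steps up when surplus crosses a threshold and can never step down, and capital is injected at proportional cost whenever surplus would go negative. This is purely illustrative; it is not the paper's optimal free-boundary strategy, and all parameter values and thresholds are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Cramér-Lundberg parameters (illustrative values)
premium_rate = 2.0      # premium income rate c
claim_intensity = 1.0   # Poisson claim-arrival rate
mean_claim = 1.5        # exponential claim-size mean
injection_cost = 1.1    # pay 1.1 per unit of capital injected

# Hypothetical ratcheting rule: step the dividend rate up at thresholds;
# the rate may never decrease (the self-path-dependent constraint).
thresholds = [4.0, 8.0, 12.0]
rates = [0.0, 0.5, 1.0, 1.5]

dt, n_steps = 0.01, 5_000   # horizon of 50 time units
surplus, rate_idx = 2.0, 0
dividends_paid, injections = 0.0, 0.0
rate_path = []

for _ in range(n_steps):
    # ratchet up (never down) when surplus crosses the next threshold
    while rate_idx < len(thresholds) and surplus > thresholds[rate_idx]:
        rate_idx += 1
    rate = rates[rate_idx]
    rate_path.append(rate)

    surplus += (premium_rate - rate) * dt
    if rng.random() < claim_intensity * dt:   # claim arrival this step
        surplus -= rng.exponential(mean_claim)
    if surplus < 0.0:                         # inject capital to avoid ruin
        injections += injection_cost * (-surplus)
        surplus = 0.0
    dividends_paid += rate * dt

print(f"dividends: {dividends_paid:.2f}, injection cost: {injections:.2f}")
```

The simulated rate path is nondecreasing by construction, which is exactly what the ratcheting constraint in the paper requires of any admissible strategy.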

Open paper →

Generative Augmented Inference

arXiv • Score: 15 • 2026-04-16

Data-driven operations management often relies on parameters estimated from costly human-generated labels. Recent advances in large language models (LLMs) and other AI systems offer inexpensive auxiliary data, but introduce a new challenge: AI outputs are not direct observations of the target outcomes, and can involve high-dimensional representations with complex and unknown relationships to human labels. Conventional methods leverage AI predictions as direct proxies for true labels, which can be inefficient or unreliable when this relationship is weak or misspecified. We propose Generative Augmented Inference (GAI), a general framework that incorporates AI-generated outputs as informative features for estimating models of human-labeled outcomes. GAI uses an orthogonal moment construction that enables consistent estimation and valid inference with a flexible, nonparametric relationship between LLM-generated outputs and human labels. We establish asymptotic normality and show a "safe default" property: relative to human-data-only estimators, GAI weakly improves estimation efficiency under arbitrary auxiliary signals and yields strict gains whenever the auxiliary information is predictive. Empirically, GAI outperforms benchmarks across diverse settings. In conjoint analysis with weak auxiliary signals, GAI reduces estimation error by about 50% and lowers human labeling requirements by over 75%. In retail pricing, where all methods access the same auxiliary inputs, GAI consistently outperforms alternative estimators, highlighting the value of its construction rather than differences in information. In health insurance choice, it cuts labeling requirements by over 90% while maintaining decision accuracy. Across applications, GAI improves confidence interval coverage without inflating width. Overall, GAI provides a principled and scalable approach to integrating AI-generated information.
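The flavor of this construction can be sketched with a much simpler augmented estimator in the same spirit (not GAI itself): fit a model of the human label given the AI signal on the small labeled set, average the model's predictions over the large unlabeled pool, and debias with the labeled-set residuals. The data-generating process, signal strength, and sample sizes below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.0

# Large pool with cheap AI-generated signals; only a few human labels.
N, n = 50_000, 500

def draw(k):
    y = true_mean + rng.standard_normal(k)   # human-label outcome
    s = y + 0.5 * rng.standard_normal(k)     # noisy AI signal for the same unit
    return y, s

y_lab, s_lab = draw(n)    # labeled set: label and AI signal both observed
_, s_pool = draw(N)       # unlabeled pool: AI signal only

# Flexible (here simply linear) model of the human label given the AI
# signal, fitted on the labeled set only.
b1, b0 = np.polyfit(s_lab, y_lab, 1)
predict = lambda s: b0 + b1 * s

# Augmented estimator: model-based mean over the big pool plus a
# debiasing correction from the labeled residuals.
augmented = predict(s_pool).mean() + (y_lab - predict(s_lab)).mean()

# Baseline that uses the human labels alone.
labels_only = y_lab.mean()

print(f"labels-only: {labels_only:.3f}, augmented: {augmented:.3f}")
```

Both estimators are consistent for the true mean; when the AI signal is predictive, the augmented version borrows strength from the large unlabeled pool, which is the intuition behind the "safe default" property described above.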

Open paper →

Until next time—stay credible.

— The Credibility Report

Edition 016 | Prepared April 19, 2026 (UTC)