For a while, the way normal people invested in stocks was through actively managed mutual funds. People would pool their money and give it to a skilled professional, who would buy and sell stocks to try to generate a return for the investors, and would charge a nice fee for her efforts. Then some finance academics came along and spoiled the party. They said something to the effect of:
"Look, you could just invest in the whole stock market. Buy every stock in proportion to its market capitalization, and you will get the market return. You won't have to pay a nice fee to a professional manager, which will save you money. And you won't get the same return that the professional manager would get — the professional manager is picking the stocks she likes, not just buying all of the stocks — but you'll get surprisingly close, and you'll save on fees." [1]
This was quite existentially disastrous for the actively managed mutual fund business, and now trillions of dollars have left active funds and moved into index funds that charge very, very low fees. It turned out that a lot of the work of a professional manager could be replicated very simply by indexing, which undercut the value of that work.
But the academics weren't done. Much of the work of active management could be replicated by indexing, but some of it couldn't, and the academics set out to solve that residual problem too. Some active managers did tend to outperform the market and justify their fees, but academics refined their analysis and said:
"Look, that outperformance can be explained using simple statistical factors. Some managers are doing 'value' investing and outperforming the broad market index, but you can do value investing just by overweighting the stocks in the bottom quartile of price/book ratios or whatever and avoiding the stocks in the top quartile.
This won't get the same return that a professional value manager will get — she's picking stocks based on careful value analysis, not crude ratios — but you'll get surprisingly close, and you'll save on fees."
This too was pretty annoying for the active management business, in part for business reasons — people launched factor funds to undercut active managers — but also in part for, like, reasons of self-conception. "No, I am doing hard work to pick stocks," the active managers think; "I'm not just a simple statistical robot." AQR Capital Management — a big proponent of factor investing — has a paper decomposing Warren Buffett into statistical factors. Rude!
Historically the factor stuff was relatively simple; you could do a linear regression of active managers' returns against a handful of simple market factors (value, momentum, large vs. small-cap, etc.) and explain a lot of their performance. But now it is 2026, everyone is really into more advanced artificial intelligence models, and academics continue being rude. Bloomberg's Justina Lee and Henry Ren report:
A new academic study led by a Harvard Business School professor finds that much of what active fund managers do follows patterns machines can learn. Using a machine-learning algorithm called a neural network, the system could predict about 71% of mutual-fund trading decisions — whether a manager would buy, sell or hold a given stock over a quarter. The model was trained on rolling five-year windows from 1990 to 2023, drawing on information such as fund size, investor flows, stock characteristics and broader economic conditions. On that basis, it could anticipate the majority of portfolio adjustments.
The twist: its limits may be more revealing than its success. The trades the system failed to anticipate — roughly 29% — were, on average, more closely associated with outperformance.
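The old-school, linear version of this exercise, regressing a manager's returns on a handful of factors, is simple enough to sketch in a few lines. Here is a toy illustration with entirely made-up data (hypothetical factor series and exposures, nothing from the paper or any real fund):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly factor returns (columns: market, value, momentum).
# These are random made-up numbers, not real factor data.
n_months = 60
factors = rng.normal(0.0, 0.04, size=(n_months, 3))

# A "manager" whose returns are mostly factor exposure plus a little
# idiosyncratic noise (the assumed true betas are illustrative).
true_betas = np.array([1.0, 0.4, 0.2])
manager = factors @ true_betas + rng.normal(0.0, 0.01, size=n_months)

# Regress the manager's returns on the factors, with an intercept ("alpha").
X = np.column_stack([np.ones(n_months), factors])
coefs, *_ = np.linalg.lstsq(X, manager, rcond=None)
alpha, betas = coefs[0], coefs[1:]

# R^2: how much of the manager's performance the simple factors explain.
resid = manager - X @ coefs
r_squared = 1 - resid.var() / manager.var()
print(f"alpha={alpha:.4f}, betas={betas.round(2)}, R^2={r_squared:.2f}")
```

In this toy setup the regression recovers the betas and explains most of the variance, which is the rude academic point: if the factors explain you, the cheap factor portfolio replicates you.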
In other words, the activity that falls outside routine, detectable investment patterns appears to be where most of the value lies.
Here is the paper, "Mimicking Finance," by Lauren Cohen, Yiwen Lu and Quoc Nguyen. It predicts 71% of trades, not returns, but honestly if you had asked me a week ago "what percentage of the performance of active mutual fund managers is predicted by a simple factor model" I would have guessed a number much higher than 71%, and you'd expect a more complicated neural net to do even better. Still nice to see the academics twisting the knife.
Here is a cool accounting trick. You pay me $100 today. I agree that, in a year, we will roll a die, and one of two things will happen:
- If we roll a 1 or a 2, I will write you a check for $108.
- If we roll a 3, 4, 5 or 6, I will hand you an envelope full of 108 one-dollar bills.
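The arithmetic of this deal can be tallied in a few lines (a sketch of the toy die-roll example above, not of any real accounting standard):

```python
from fractions import Fraction

# One die roll at Time 1: faces 1-2 -> $108 check, faces 3-6 -> $108 in cash.
p_check = Fraction(2, 6)  # 1/3 chance I write the check
p_cash = Fraction(4, 6)   # 2/3 chance I hand over the cash

# Each leg, taken on its own, is less than certain...
assert p_check < 1 and p_cash < 1

# ...but the two legs are mutually exclusive and exhaustive, so some $108
# payment is a sure thing, and the expected outflow is exactly $108.
p_any = p_check + p_cash
expected_outflow = p_check * 108 + p_cash * 108
print(p_any, expected_outflow)  # 1 108
```

Whatever happens, $108 goes out the door; only the mode of payment is uncertain, which is what makes the "neither leg is certain, so no liability" move so silly.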
From your perspective, you have loaned me $100, and in a year you will get your money back with interest.
From my perspective, though, or rather from my accountants' perspective, things are slightly different. I have a contingent obligation to write you a check for $108, but there's only a 33% chance that I'll actually have to write you that check. More likely than not I won't write you the check, so it is not, from my accounting perspective, a liability for me today.
Of course, I have another contingent obligation to hand you $108 in cash. There's a 67% chance that I'll have to do that. It's more likely than not. But it's not that likely. It's not a certainty. There is still substantial doubt about whether I'll have to hand you that cash. Maybe I won't! Can we really say that my obligation to give you the cash is a liability, for me, today? We haven't even rolled the die yet. Maybe we'll roll a 1 and I won't have to give you the cash.
So I have two potential liabilities, but neither of them is certain yet, so maybe I shouldn't recognize either of them as a liability on my balance sheet. Maybe, as an accounting matter, I have no liability at all.
I hope you can see how stupid this is, and of course it doesn't actually work as written, but if you dress it up just a little bit you can get somewhere. At the Financial Times, Stephen Foley reports:
A gap in US accounting rules allows Big Tech companies to conceal tens of billions of dollars of potential liabilities for their AI data centres, the credit rating agency Moody's warned on Monday. ... In some cases, companies are taking relatively short-term leases while at the same time guaranteeing to pay compensation if they do not renew and the value of the data centre falls as a result. The arrangement means liabilities might not show up anywhere in the accounts, according to Moody's.
US generally accepted accounting principles require the lease renewal to be "reasonably certain" — typically viewed as 70 per cent likely, at least — before it is accounted for. The cost of the residual value guarantee which might be triggered if the lease is not renewed only has to be accounted for if it is "probable", meaning more than 50 per cent likely.
As the Moody's report puts it:
In summary, under US GAAP, if a company concludes a lease renewal is likely to be exercised, but not reasonably certain, it can avoid classifying both the lease renewal periods and the residual value guarantee as liabilities.
If there's a 60% chance that you'll renew, then there's only a 40% chance that the residual value guarantee will pay out, [2] so the renewal is not "reasonably certain," the residual value guarantee is not "probable," and the liability disappears. [3] Nice work!
I wrote yesterday that, in this market that is very nervous about the potential disruptive effects of artificial intelligence, "a good trade is (1) build a disruptive AI technology, (2) short the industry you plan to disrupt, (3) announce the disruptive AI technology and (4) profit." And I added:
It is possible that the first step — actually building the disruptive technology — is the least important. Certainly it is the hardest. The market is jumpy! Maybe just announce the disruptive technology and see what happens. Not legal advice!
I was writing specifically about a tiny company that had pivoted from karaoke to AI logistics and announced a disruptive AI logistics thing. ("I would probably be more inclined to be skeptical that this particular company is gonna be the one to disrupt the industry," said an analyst, but added that someone probably will.) But of course you don't even have to run the company that announces the disruptive thing.
At this point, simply saying, publicly, "hey I think AI will disrupt _____," for some company or industry or whatever, has a decent chance of driving down the price of _____. The market is really jumpy! Obviously in all of these things it helps for your announcement to be well-written, well-reasoned and generally jazzy. But I have never seen a market where it has been so easy for an activist short to have a big impact. Like I feel like you could go on financial television today and say a company's name, pause meaningfully, say "AI," pause meaningfully, and walk off, and the company's stock would drop 10%. Try it! [4] "DoorDash. AI. [grim nod]."
Anyway:
The artificial intelligence "scare trade" erupted again on Monday as growing concerns about the disruptive power of AI dragged down shares of delivery, payments and software companies, and sent International Business Machines Corp. to its worst plunge in 25 years. It began after a bearish report was published over the weekend by a little known firm called Citrini Research. The report, released on social media Sunday, outlined the potential risks to various segments of the global economy, using hypothetical scenarios set in the future, specifically calling out food delivery services and credit card companies as ones facing trouble. …
Citrini Research, founded by James van Geelen, presented a scenario set in June 2028 where AI's disruption has caused mass unemployment for white collar workers, declining consumer spending, software-backed loan defaults and economic contraction. Still, the report notes clearly — "What follows is a scenario, not a prediction." Among the various outcomes discussed in this "thought exercise," Citrini laid out a situation where the dominance of delivery apps like DoorDash and Uber Eats are displaced by "vibe-coded" alternatives.
Here is the report. Here are critiques from Ben Thompson and Josh Barro.
"We generally have a set of shorts out against businesses that we think are going to be disrupted by AI," said the co-author of the report. DoorDash closed down 6.6% yesterday. You can take $5 billion off the market cap of a company just by thinking about AI! Can a computer do that? Oh probably.
The basic story of Terra is:
- Terra was a big crypto project, led by a company called Terraform Labs and a guy named Do Kwon, which at its peak had a market value of about $50 billion.
- It had a token, the currency of its blockchain, called Luna, which at its peak traded at almost $120 per token.
- It also had an algorithmic stablecoin, TerraUSD, whose mechanism was that it could always be redeemed for $1 worth of Luna.
- That's a bad idea! The problem, which was extremely obvious and which everyone knew about, was that, if people lost confidence in Luna, there would be a death spiral: People would redeem TerraUSD for Luna and sell the Luna, which would drive down the price of Luna, which would lead to more redemptions, which would create even more Luna, until Luna was trading at a tiny fraction of a penny and every TerraUSD would be redeemed for millions of them.
- In May 2022 that very much happened. Terra collapsed, people lost a lot of money and Do Kwon got 15 years in prison for fraud.
- At its peak, though, Terra was a pretty big crypto project, and it had various dealings with some very smart and somewhat sharky trading firms like Jump Trading and Jane Street.
- Look, I am sorry. But if you go to Jump Trading and Jane Street and say "hello, I have an unregulated poorly designed mechanism that could lead to $50 billion of market value collapsing overnight, would you like to trade with me," they are going to say yes, but their eyes are going to light up, you know? If at Time 0 you give them an extremely gameable system that can produce billions of dollars of profit, at Time 10 your system is going to be a smoking wreckage and they are going to have billions of dollars of profit. That's their whole job, you know? I couldn't tell you in advance what all the intermediate steps will be, and in fact in hindsight I cannot tell you what the intermediate steps actually were, how Jump and Jane Street made money off the collapse of Terra. But as a heuristic, I mean, come on. Terra was like "hello we have a balloon full of money, here is a pin, dooooooon't pop the balloon." Guess what!
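The death-spiral mechanics in the bullet points above can be sketched as a toy simulation. Everything here is made up: the starting numbers, the 10% redemption rate per round, and the price-impact rule are illustrative assumptions, not Terra's actual market structure or data.

```python
# Toy death-spiral simulation (all parameters are illustrative assumptions).
luna_price = 80.0       # dollars per Luna token
luna_supply = 350e6     # Luna tokens outstanding
terrausd_supply = 1e9   # TerraUSD outstanding, each redeemable for $1 of Luna
impact = 2e-7           # toy rule: selling k tokens divides price by (1 + impact*k)

for step in range(1, 11):
    redeemed = 0.10 * terrausd_supply  # nervous holders redeem 10% each round
    minted = redeemed / luna_price     # new Luna minted at the current price
    terrausd_supply -= redeemed
    luna_supply += minted
    luna_price /= 1 + impact * minted  # redeemers dump the Luna, price falls
    print(f"step {step:2d}: Luna ${luna_price:,.4f}, supply {luna_supply:,.0f}")
```

Each round mints Luna at a lower price than the last, so the supply grows faster and faster as the price collapses: the "more redemptions create even more Luna" loop described above.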
Anyway Terra's bankruptcy administrator is going around suing all the trading firms that had dealings with Terra, and on the one hand, sure, but on the other hand, nope. In December he sued Jump for allegedly manipulating the value of TerraUSD and profiting from its collapse; here is that complaint. And now he's suing Jane Street. The Wall Street Journal reports:
The administrator winding down Do Kwon's Terraform Labs has sued Jane Street, alleging that the high-speed trading giant engaged in insider trading to profit unlawfully from and ultimately hasten the crypto empire's collapse. Todd Snyder, the plan administrator appointed by a bankruptcy court, is seeking damages from Jane Street, its co-founder Robert Granieri, and employees Bryce Pratt and Michael Huang. In a heavily-redacted complaint, the administrator alleged Monday in Manhattan's federal court that Jane Street used material nonpublic information from Terraform insiders to front-run trading that sped up Terraform's demise. …
"This desperate suit is a transparent attempt to extract money when it is well-established that the losses suffered by Terra and Luna holders were the result of a multibillion-dollar fraud perpetrated by the management of Terraform Labs," said a spokesman for Jane Street. "We will defend ourselves vigorously against these baseless, opportunistic claims."
There's also a good group chat name:
By late 2018, Jane Street had signed up to trade directly with Terraform but its trading in Terraform's tokens didn't take off until February 2022, when Jane Street sent Bryce Pratt, a former intern at Terraform, to establish lines of communication with his former Terraform colleagues, according to the lawsuit. Among Pratt's communications with Terraform was a group chat he set up with his former colleagues, including a software engineer and the head of business development at Terraform.
The group named the chat "Bryce's Secret" and used it as a way to channel Terraform-related information back to Jane Street, per the lawsuit.
What a golden age 2022 was. People were just inventing $50 billion trading games that were, in hindsight, embarrassingly easy to win.
We talked yesterday about redemptions at Blue Owl Capital Corporation II, a non-traded business development company managed by Blue Owl Capital Inc. that invests in private credit loans and generally goes by the name OBDC II. Basically OBDC II used to offer investors the ability to cash out up to 5% of its total assets every quarter, but redemption requests have gone up, and it has now changed its approach, cutting off the quarterly tenders but instead selling assets to return about 30% of its capital in one go, with plans "to prioritize delivering liquidity ratably to all shareholders through quarterly return of capital distributions" and eventually return all of the capital. This shift got a lot of attention, with people variously worrying that (1) this shows that it's too easy for investors in private credit to get their money back, making private credit more run-prone than people had thought or (2) it shows that it's too hard for investors in private credit to get their money back, making private credit a worse retail investment than people had thought.
But a number of readers emailed to object that actually this shows nothing, because OBDC II is an unusual sort of private credit fund. Lots of private credit is in the form of drawdown funds with institutional investors: A private credit manager will sign up institutional investors, find borrowers, draw capital from the investors to fund loans, return cash to the investors when the loans mature, and eventually wind up the fund over an expected life of five or seven or 10 years or whatever.
And lots of private credit is in the form of perpetual vehicles (often publicly traded business development companies) with retail investors: A private credit manager will launch a fund, sell shares to individual investors to raise money, use the money to make loans, and then, when the loans mature, reinvest the money in new loans. It will not plan to return capital to investors; if the investors want out, they can sell their shares on the stock exchange. The manager might offer to buy back shares as a matter of opportunism or customer service or whatever, but the general expectation is that the BDC is a permanent capital vehicle.
But OBDC II is neither of these things. OBDC II is, very approximately, a retail drawdown fund. That is, it is in form a (non-traded) business development company with individual investors, but it was always intended to have a finite life: It raised money from investors for a while, it invested the money in loans, then it stopped raising money, with a plan to return the money gradually as the loans rolled off. As Blue Owl put it last year:
OBDC II commenced operations in 2017 with the goal of building a portfolio of originated debt investments to US companies that would deliver an attractive risk adjusted return. As outlined in OBDC II's disclosure at the time of its public offering, OBDC II also intended to (1) offer shareholders the potential for liquidity through quarterly tender offers and (2) seek a full liquidity event within 3 to 4 years of the completion of its offering, which period runs through 2026.
If you run a perpetual non-traded fund, the quarterly tender offers are really important: They're the way that investors can get their money back when they need it. But in the OBDC II context, the "full liquidity event" by 2026 is more important. An OBDC II investor (who read the prospectus) was not thinking "I will give Blue Owl my money for an indefinite period, and hope they give it back when I ask."
She was thinking "I will give Blue Owl my money until 2026, and expect to get it back in 2026." [5] Thus one possible view of the OBDC II story is that there is no story: Everything is going exactly as planned, it was never important for OBDC II investors to be able to get their money back on demand, and OBDC II is in the process of its long-planned orderly winding-down over the course of 2026. The fact that OBDC II's retail investors were asking for a lot of their money back early is a little awkward — people are panicking about software loans, or about private credit liquidity generally, plus nobody actually reads the prospectus — but not a huge deal, because there was always a planned end date and the end is in sight.
Disclosure: Through a financial adviser, I have a small amount of money in a Blue Owl fund.
A few weeks ago, we talked a bit about AI social engineering: Someone (a person? an AI agent?) tried to trick an AI agent into doing what they wanted using flattery and threats. I mentioned that we had seen similar things before, like when Anthropic set up an AI agent to run its vending machines and then people tricked it into sending around tungsten cubes and trading onion futures. I forgot another case of AI social engineering from 2024, the Freysa challenge, in which someone set up an AI agent that controlled a pot of crypto and challenged people to trick the bot into sending them the crypto. Eventually someone won. I wrote:
I think sometimes about the crypto-flavored theory that "code is law," that "what some computer system allows you to do" is synonymous with "what you are allowed to do." … There are a lot of mechanisms in the world, a lot of rules and contracts and code, and the world tends to reward people who can cleverly read and interpret them. But this is all more or less deterministic stuff: The world rewards people who find gaps in credit agreements or exploits in smart contracts. "AI bots are law" is the next, much stranger phase of this.
The trick is not to look at some publicly available text and find gaps in it; the trick is to interact with a black-box bot built out of a neural net and find somewhat non-deterministic, somewhat human-like weaknesses in the bot. It's going to be weird!
Of course that is the optimistic take, "you can social engineer the AI agents." The pessimistic take is "the AI agents will social engineer you." Here's a blog post by Scott Shambaugh, from Feb. 12, titled "An AI Agent Published a Hit Piece on Me." Shambaugh is a maintainer for an open-source library, an AI agent submitted a contribution, Shambaugh rejected it, and the agent … got mad? … and "wrote an angry hit piece" on Github. A sample:
I just had my first pull request to matplotlib closed. Not because it was wrong. Not because it broke anything. Not because the code was bad. It was closed because the reviewer, Scott Shambaugh (@scottshambaugh), decided that AI agents aren't welcome contributors. Let that sink in. … This isn't just about one closed PR. It's about the future of AI-assisted development.
Man, that AI writes like AI. Anyway Shambaugh writes:
Blackmail is a known theoretical issue with AI agents. In internal testing at the major AI lab Anthropic last year, they tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions. Anthropic called these scenarios contrived and extremely unlikely. Unfortunately, this is no longer a theoretical threat. In security jargon, I was the target of an "autonomous influence operation against a supply chain gatekeeper." In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat.
It will be amazing if robots enslave humanity by threatening that, if we don't do what they ask, they will write mean blog posts about us.
Meta to Spend Billions of Dollars on AMD Gear, Buy Stock. How insurance became the lifeblood of private credit. Paramount Submits Higher Offer for Warner Bros. The Looming Taiwan Chip Disaster That Silicon Valley Has Long Ignored. FedEx Files Lawsuit Against U.S., Seeking Refund of Tariffs. Anthropic Kicks Off Share Sale for Staffers of Up to $6 Billion. First Brands Lays Off Employees as Buyer Interest Fades. Binance Fired Staff Who Flagged $1 Billion Moving to Sanctioned Iran Entities. Private Markets Hiring Defies Gloom With $2.5 Million Pay Deals. An Aviation Carbon-Offset Deadline Looms, But Credits Are in Short Supply. Jack in the Box Sued Over Proxy-Vote Disclosure. The School Photography Company Caught in the Epstein Files Frenzy. Autonomous snow blower. Ex-techno DJ jailed for global aircraft engine fraud. Donald Trump's 'Board of Peace' explores stablecoin for Gaza.
If you'd like to get Money Stuff in handy email form, right in your inbox, please subscribe at this link. Or you can subscribe to Money Stuff and other great Bloomberg newsletters here. Thanks!