Sunday, February 1, 2026

The new ESG: attention addiction?

Classic ESG focuses on companies that pollute the environment (think industrials, coal plants, etc.), with the idea of either (a) filtering out heavy polluters or (b) investing in renewables. It feels like decades of environmental pollution in the US -- smog-filled cities, birth defects, etc. -- have slowly shifted the nationwide consensus toward "minimize environmental harm."

A long aside: there's a whole separate conversation about whether it's better to simply divest from the biggest polluters or to engage with them; Yale professor Kelly Shue argues that if we actually care about reducing absolute emissions, it's counterproductive to choke off funding to these "brown" firms. It reminds me of the wider political discourse -- is it better to cut every Trump voter out of your life or to actually engage with a few of them? The former feels better short-term (to the individual), but the latter is better for the long-term health of the nation.

Some of the biggest public companies of the 1980s -- Exxon, GE, Chevron, DuPont, GM, Ford, Philip Morris, etc. -- were in oil and industrials. It took a few decades for the political and investing appetite to catch up to these companies. We now have a whole industry -- frameworks, consultants, you name it -- around mitigating environmental "risk" in the portfolio.

The giants of the 2010s/2020s skew heavily toward tech -- Apple, Microsoft, Google/YouTube, Facebook, Amazon. We already know there's a huge problem with social media addiction, and Australia has recently proposed banning social media for kids under 16. But ... I haven't seen or read much at all about "anti-virality" ESG investing. The "environmental detritus" of social media addiction is hard to see and hard to measure. And yet all the largest social media sites (be it TikTok, Instagram, Reddit, Facebook, or any others) are incentivized to keep you addicted to the platform, because more average time spent on the app means more ads served, which means more revenue.

In my view, the problem is not just in the addiction itself but in the example it sets to start-ups. If all the biggest players are competing on keeping your attention, how can a new start-up break in without trying to make something flashier, more attractive, more addictive? Does social media become a race to the bottom? And more importantly, if you are an investor who cares about investing responsibly (i.e. more than just what's in an ESG framework), can you feel good about investing in these types of start-ups or the VCs who back them?

Ten years from now, I think we will look back and realize that a lot of this tech -- which often makes for great financial investments! -- is socially harmful, and I hope we will have the language, metrics, and frameworks to justify divestment or substantial engagement. The question of whether future investors should avoid start-ups because of their harmful effects on society (social media addiction, but also prediction aka gambling markets, etc.) will always be contentious, but I think recognizing it as the "new ESG frontier" will push us to ask the right questions and collect the right metrics, especially as tech plays an increasingly outsized role in society as a whole.

Thursday, January 29, 2026

On CT Innovations

I've been at CT Innovations (CI) for a few months now, and it's as good a time as any to jot down some pros and cons (which are mostly just tradeoffs) at the company. Nothing here should be surprising or beyond what you can find elsewhere on the internet, but it's still good to put it all in one place for myself. (My vantage point: someone newer to VC, with 8 years working for a large company and an MBA.)

Strengths

  • People - the team is welcoming, and most of the directors/partners have worked at a start-up or in industry. Most have been at CI for a while, which is great for continuity of knowledge, but also makes CI a final career landing location (which helps with the sharing of deals, etc.)
  • Connection to the state - as someone who's grown up in the state, I've been fascinated by what makes Connecticut attractive to businesses. It's also been great to see the connections with Yale, UConn, and other schools in the state.
  • More creative capital - While the bulk of the portfolio is bread-and-butter early-stage start-up deals, CI also (a) makes technical assistance grants, (b) tries to invest locally, and (c) has a robust venture debt portfolio. Having the option for venture debt makes you think a little more not just about "is this a company we should invest in" but also "is equity the correct tool to use."
  • Momentum in the state - ClimateHaven was founded in New Haven in 2022 to support climate tech, and the governor recently announced a $121M initiative to build out QuantumCT. While the state will fall short in certain areas (e.g. it's hard to compete with SF in pure tech), there continue to be interesting, credible areas that the state is investing in.
  • Breadth of companies, stages, and opportunities - CI covers bio/life sciences, consumer, tech, and climate, which is excellent for learning a little bit about nearly everything. CI's bread-and-butter is early-stage (pre-seed/seed, with follow-ons), but it's also willing to invest at later stages. It also has a (small) fund-of-funds portfolio of venture funds, an area of diligence that's attractive to me.

Weaknesses/ Trade-offs

  • Harder to get depth into an industry - the tradeoff of breadth is that it's hard to find the time to go be a world-expert on any one thing. For example, some VCs might have their advantage be deep expertise on the cybersecurity market; that type of depth is harder to get at CI. (Again, for better and for worse.)
  • Pseudo-public - this shows up in small things (like being beholden to FOIA) and larger things (like lower salaries, which are publicly available).

Friday, January 16, 2026

Is AI more than just automation?

I'm generally skeptical about both (a) new things and (b) lines (e.g. lines for viral restaurants), and I've always wondered: is AI more than just automation? I've started to think about AI on a spectrum:

  • Is the AI simply automating a previous task, or does it have some new performance edge?
    • Another way of thinking about this: is AI simply part of the software stack, or is there something about it that paints the AI as a feature?
    • (I've finished up watching Culinary Class Wars, so yet another analogy ...)
  • Is there some proprietary data or info that the automation can build on? 
    • My sense is that every AI-related start-up must have some proprietary data
  • Could the "AI" have existed before LLMs (old-fashioned machine learning), or was it enabled by LLMs?
A few investment angles I've been thinking about through this lens:

Evaluating early-stage AI start-ups
I've spent the past few months looking for tech/AI start-ups, and it feels like it's been hard to find early-stage start-ups doing cool new things in AI. To be clear, I've had trouble finding start-ups with "performance edge" AI; there is so much money and talent migrating to the MAG 7+ that the remaining cutting-edge AI research feels too small and not enough to build a company around. There have been far, far more start-ups looking to do simple automation, using commonly available tools (e.g. LLM APIs, RAG, finetuning) with some "proprietary" twist. The challenge is then purely go-to-market -- can the start-up grab market share fast enough with good-enough retention? (I.e. nothing new here with AI: it's the age-old business adage that the best business doesn't always win.)

Automation vs. novel AI score: mostly automation
Proprietary data: deep knowledge of workflows being automated
LLM enablement: LLMs' advantage over prior machine learning is the ability to process language, and my sense is that very few of these use cases rely on taking in or outputting language (i.e. low LLM enablement).


CRM: managing inbound and outbound emails
Our firm recently installed a new CRM, and it tracks inbound and outbound emails centrally. This is great to be able to see: did someone reach out to this start-up already (even if a couple years ago)? (I liked it so much that I finally caved and started using a personal CRM.)

In the current state, this is just automating workflows -- no AI required! You could hook into the Gmail APIs to extract emails and store them in a database, sorted by contact. (I started doing this, before deciding that I was fine paying $12/month for this service.) The core CRM service in both cases is not AI -- it is storing contacts, emails, and interactions in a centralized repository for the whole company to see.
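For the curious, a bare-bones version of that DIY route might look something like this -- a minimal sketch, assuming you've already done the Gmail OAuth setup; the file paths, scopes, and table schema here are my own placeholders:

```python
# Minimal sketch: pull recent Gmail messages and store sender/subject/date in a local
# SQLite table. Paths, scopes, and schema are illustrative placeholders.
import sqlite3
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/gmail.readonly"])
gmail = build("gmail", "v1", credentials=creds)

db = sqlite3.connect("crm.db")
db.execute("""CREATE TABLE IF NOT EXISTS emails
              (msg_id TEXT PRIMARY KEY, contact TEXT, subject TEXT, sent_date TEXT)""")

# List recent message IDs, then fetch just the headers for each one
resp = gmail.users().messages().list(userId="me", maxResults=100).execute()
for stub in resp.get("messages", []):
    msg = gmail.users().messages().get(
        userId="me", id=stub["id"], format="metadata",
        metadataHeaders=["From", "Subject", "Date"]).execute()
    headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
    db.execute("INSERT OR IGNORE INTO emails VALUES (?, ?, ?, ?)",
               (stub["id"], headers.get("From"), headers.get("Subject"), headers.get("Date")))
db.commit()
```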

Nevertheless, AI can be layered on top of this. For example, I've started to like the meeting note summary feature, which saves me from having to transcribe my notes. Now that the data is all there, who knows what other big data things it can do; perhaps it can tell me that I ask worse questions in the late afternoon, or to follow up with X company because they were just in the news. 

Automation vs. novel AI score: mostly automation
Proprietary data: all your emails, all your notes, all your contacts
LLM enablement: meeting note recording and summary not possible before LLMs


Investment research
There's a gigantic market for investment research and investment data -- AlphaSense, Pitchbook, Crunchbase, Bloomberg, etc. etc. Each of them is hoping to write significant portions of the investment memo for you, and given their access to proprietary data, it's hard for any individual investment office to outcompete them.

Automation vs. novel AI score: mostly automation
Proprietary data: company docs (10-Ks, 10-Qs, etc.), earnings transcripts, expert review calls, market reports (sell-side and buy-side), clean public market data, etc.
LLM enablement: ability to process data and generate reports significantly accelerated by LLMs


= = =

If you're not working in the investment world, you might have a general sense that firms sit on a ton of data ... but no real sense of how it could be used. Now that I'm in a direct investing role, I get to experience some of the manual work pain that will inevitably inspire future automation.

Now, onto some AI features I'd love to see in my investment processes. Right now, these are just ideas, which would need to be fleshed out, prioritized, and developed. They fall under the "mostly automation" bucket, using the GP/LP's proprietary data, and are enabled by LLMs.
 

Scenario: for a start-up thinking about a new revenue partnership, I had to figure out the cap table, convertible notes structure, and other financial info. 

Easily digestible history of events
In order to figure this out, I had to comb through our SharePoint's morass of start-up board meetings, capital calls, legal docs, etc. It seems like all this info should be readily available, though.

This scenario reminds me of a project I initiated back at Epic. The data for prescriptions was stored across a few different "masterfiles," so it was hard to have a unified time series of all actions (e.g. pharmacist actions, adjudications, inventory, interface messages, etc.). I rolled these all up into one table -- a medium-sized project with a nice long-term payoff (that continues to pay dividends!).

These documents should work the same way. As they come in via email, they should be summarized and added to a status history table. For example:
  • "1/16/2025 - 2024 Q4 Board Meeting"
  • "4/15/2025 - Series A initial close - $3.4M raise ($10.6M pre; $14.0M post; 6% pref dividend; major investors include ABC and XYZ)"
  • Etc.
You'd also want a link to the source docs (in case you need to dig into details of the round). For LPs, this also would extend into digesting and parsing a wide array of other docs -- capital call notes, performance updates, etc. 
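A sketch of what that ingestion step could look like, assuming an OpenAI-style chat API -- the model name, prompt, and table schema are my own illustrative guesses, not a spec:

```python
# Illustrative sketch: summarize one incoming document into a single status-history row,
# keeping a pointer back to the source doc. Model name, prompt, and schema are assumptions.
import json
import sqlite3
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def add_history_row(doc_text: str, source_path: str, db: sqlite3.Connection):
    db.execute("""CREATE TABLE IF NOT EXISTS status_history
                  (event_date TEXT, summary TEXT, source_doc TEXT)""")
    prompt = ("Summarize this document as one line for a portfolio status history. "
              "Respond as JSON with keys 'date' and 'summary'; include round size, "
              "pre/post-money, and major investors if present.\n\n" + doc_text[:20000])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                           # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    row = json.loads(resp.choices[0].message.content)
    db.execute("INSERT INTO status_history VALUES (?, ?, ?)",
               (row.get("date"), row.get("summary"), source_path))  # link back to source doc
    db.commit()
```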

Organize all the docs
If AI can read and understand the summary of the docs, do we still need to maintain folder structures like we do today? Maybe not. Maybe you can just dump all the files into a single place, and AI does the sorting for you. Where this might be great: a VC fund's board meeting deck that includes details on a few portfolio companies. You want the doc in both places -- and maybe AI allows that.

Ask questions of the docs
The classic LLM application: be able to ask questions across a set of documents. In the above example, be able to ask things like "does this round have redemption rights" or "what is the interest on the notes, has it changed, and where is the evidence." (The natural question then becomes: can we preempt these questions and present this info ahead of time? Probably depends on use case.)
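A bare-bones sketch of the retrieve-then-ask pattern behind this -- the embedding model, chunking, and prompt below are simplified stand-ins, not a recommendation:

```python
# Embed document chunks, find the ones most similar to the question, and pass them
# to the model as context. Model names and chunking are illustrative simplifications.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def ask(question, chunks, top_k=5):
    doc_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # cosine similarity, then keep the top_k most relevant chunks as context
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}])
    return resp.choices[0].message.content

# e.g. ask("Does this round have redemption rights?", chunks_from_legal_docs)
```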

Importable into pre-built Excel workbook
This seems to be the holy grail of finance AI (see: OpenAI's foray into financial services), but you'd ideally want to be able to take all these numbers and plug them into an Excel workbook in a logical way. Still pie-in-the-sky for this scenario (it'd have to work through convertible note conditions, pref dividends, redemption rights, liq pref, etc.), but likely more feasible for public companies (and their 10-Qs). 

This use case seems to either require (a) a super "smart" AI to be able to build in all these scenarios or (b) good starting templates that the AI simply plugs into. My bet is that (b) comes long before (a). 
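A toy version of option (b) -- the workbook, sheet, and cell addresses are invented, but the point is that the template's formulas do the real modeling and the AI (or the parsing step) just fills in known cells:

```python
# Drop extracted numbers into a pre-built template; the workbook's own formulas then
# recompute post-money, ownership, waterfalls, etc. All names/cells are invented.
from openpyxl import load_workbook

extracted = {                     # imagine these came from the doc-parsing step above
    "pre_money": 10_600_000,
    "raise_amount": 3_400_000,
    "pref_dividend": 0.06,
}

wb = load_workbook("cap_table_template.xlsx")
ws = wb["Assumptions"]
ws["B2"] = extracted["pre_money"]
ws["B3"] = extracted["raise_amount"]
ws["B4"] = extracted["pref_dividend"]
wb.save("cap_table_filled.xlsx")
```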

Friday, January 9, 2026

LLMs: the Wizard of Oz, excellent intern, but not all-knowing

The most recent Yale Alumni Magazine's focus was AI, from the students' and professors' points of view. The overarching questions still have no clear answers: how much should I use AI? And what should I use it for? The general take that I agreed with: the point of coursework (like papers) is not the paper itself, but the thinking that goes into it -- the paper is just a tool to get you to concentrate your effort! However, for other real-world use cases, the artifact is the end goal. For example, writing federal grants, generating insurance appeals, summarizing lengthy documents -- there, the output itself is what matters.

My current analogies for LLMs: 

Excellent intern

If you give an LLM a discrete task with clear instructions, it will mostly do a great job (e.g. write code to do a specific task, generate a haiku on X topic, etc.). However, just like an intern, the LLM has little context for anything, so you often have to be super detailed. (It reminds me of a 2nd grade writing assignment of "write instructions on making a PB&J" -- surprisingly hard! And then scale that up to a mildly hard task ...) If you think of college coursework as a way to just complete assignments, then LLMs-as-an-intern look like an excellent fit.

Wizard of Oz

LLMs possess knowledge of seemingly everything ... we all kind of know it's a facade by now ... but they also do hold a lot of knowledge! I've called this the riddle of the two guards before, but the idea is the same: you can ask any question, but you're not 100% sure if the answer is the truth or made up. Nevertheless, if you need to get up to speed on a topic quickly (and can live with being 80% correct about it), or have weird follow-up questions on the info it returns to you, or need someone to re-explain topics to you (e.g. I "chatted" with it to get a good conceptual understanding of diluted EPS calculations), or need someone to come up with counterfactuals, etc. etc. -- there's really no better, cheaper option than LLMs today. But -- you still need to be aware that the Wizard doesn't actually exist.

Not all-knowing

LLM-as-an-intern and LLM-as-a-wizard seem like they'd cover 95% of use cases ... but work in the real world, and there's a gigantic chasm between the two. The biggest downfalls: (1) LLMs do not have access to all the information and (2) humans are complex. In software land, once you get from writing code to architecture, decisions are a series of trade-offs. For example, if we choose option A, we'll have to do more maintenance ourselves, but have lower expenses and be less locked into a specific provider; option B, the opposite. In venture land, people pick up on small things -- like whether you feel like you can trust the person, how they answered a particular question, whether their heart is truly in the start-up -- that live solely in your head. In personal life, LLMs should not intermediate conversations between me and my wife; they lack the years and years of historical interactions (both with each other and with other people) that, again, reside only in our heads.

Now, the takeaway might just be that it's a solvable data collection problem (e.g. all your thoughts can be downloaded with Neuralink to a central repository) and that LLMs/AI will dominate over humanity ... but I find that increasingly hard to believe. In my experience, people like feeling useful, and people like interacting with other people in person (and need it! see: the pandemic). LLMs (and more broadly AI) are excellent -- both higher quality and cheaper -- at the things listed above, but they have their limits. The hard part now, I think, is to help people see where the limits are before the market does.

Tuesday, January 6, 2026

Not all good ideas make good companies, and not all profitable businesses make good companies.

I've been thinking about this in the context of (a) my own company, Transpose Health, (b) the requirements for a venture-backable start-up, and (c) most recently a blog post by Kunle. 

  • My company (Transpose Health) is ultimately a services company, and my company can command contracts on the scale of $20-100K -- which seems great, if this is all I wanted to do. If I wanted to hire someone to run the day-to-day (say, $80K minimum with no benefits), I need to make sure I have at least a few contracts lined up -- a lot of work! Compare it to just working for an Epic consulting firm for a "safe" $175K/year and all the work starts to look less and less worth it. The only way this works (i.e. makes me a millionaire) is if there is scale: either an exponential ramp-up in the number of customers, or we expand to SaaS (which we are trying), or both. 
  • The same principle applies to start-ups that are venture backable. Start-ups can be solving interesting problems well -- but ultimately the only way to have venture backing make sense is the ability to scale up. 
  • Kunle talks about using AI to build personal efficiency tools. The catch: it is almost impossible to build a company around helping individuals with small but powerful efficiency tools. (Two examples: (1) finding the correct payer entity given only a payer name and (2) building an app to aggregate to-dos across different apps into Apple Notes.) AI-powered coding can help lower the barrier to building these yourself ... but ultimately, the economics make fixing these papercut issues hard for any company to tackle. We might be left in a world where we need to build our own tools, the "last mile" of productivity apps.
It's still hard to wrap my head around: it's not enough for a company to solve interesting problems or be profitable. There are other key ingredients (scale chief among them) a company needs to be viable.

The depressing corollary is that there are some tools that will never exist as start-ups, except by someone so passionate (and naive) that they build it despite the adverse economics. Perhaps AI will be a way to overcome this hurdle.


The private equity IRR debate

My LinkedIn news feed has filled up with "IRR is fake" (thanks to following folks like constant PE-skeptic Ludo Phalippou). Some recent highlights:

  • Yale claimed a suspiciously high 31%+ IRR since inception for its private equity allocation from 1972-2002. 
  • Another LinkedIn post arguing that what matters is DPI, not IRR
Now: me thinking through this for myself ...

The math behind IRR
Ludo Phalippou kindly provides an online spreadsheet showing how a long-term IRR can be virtually meaningless. His hypothetical example (Table 2 in the spreadsheet; I backfilled the unbolded IRRs) shows, incredibly, that the IRR stays virtually constant after 1999, regardless of positive or negative cash flows:

After seeing this the first time, I didn't believe it -- I figured the math was wrong, or the equations were wrong. Big cash flows up or down don't move the IRR. The reason is in the equation itself:

0 = CF_0 + CF_1/(1+IRR) + CF_2/(1+IRR)^2 + ... + CF_n/(1+IRR)^n

Given a long enough time frame (i.e. large n), each additional year's cash flow approaches 0. Extending Ludo's math, we can calculate how much each cash flow term contributes if you use IRR = 36.3%.
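A rough way to see the mechanics (the years and the $100M figure below are my own placeholders, not Ludo's actual table):

```python
# How much can a year-t cash flow move a 36.3% since-inception IRR? Its weight in the
# NPV equation is 1/(1+IRR)^t, which collapses toward zero as t grows.
irr = 0.363
for t in [1, 5, 10, 20, 27]:                 # years since the fund's first cash flow
    print(f"year {t:>2}: discount factor = {1 / (1 + irr) ** t:.6f}")
# year 1: ~0.73, year 10: ~0.045, year 27: ~0.0002. A $100M distribution in year 27
# changes the NPV by roughly $23K in year-0 terms, so the headline IRR barely moves.
```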

Why the distinction between IRR and DPI matters now
  • Liquidity (i.e. VC/PE exits) has slowed down, meaning money is locked up longer in illiquid investments
  • "You can't eat IRR" -- universities have gotten more cash-needy with Trump's war on higher education (including federal research funding cuts and an increased excise tax), pushing top universities toward budget cuts, hiring freezes, and secondary sales of private equity stakes

A simple example
The textbook case for IRR goes something like: we invest $100 in year 1 in a new facility, which generates $55 in year 2 and $60 in year 3. The IRR here is straightforwardly calculated -- 9.7% -- and can be compared against other projects (e.g. investing in a different facility, or just investing the cash into a high-yield savings account). This use for IRR seems widely accepted: at a 9.7% return, the NPV of the cash flows is $0. 
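A quick sanity check of the 9.7% (numpy_financial is one easy way to do it):

```python
# Verify the textbook example: the NPV of the cash flows hits zero at roughly 9.7%.
import numpy_financial as npf

cash_flows = [-100, 55, 60]            # invest $100, get $55 then $60 back
print(npf.irr(cash_flows))             # ~0.097
print(npf.npv(0.097, cash_flows))      # ~0, rounding aside
```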

IRR does poorly with large IRR values
Take a simple fund: $100 investment in year 1, with $150 and $50 payouts in years 2 and 3. The IRR works out to roughly 78%.

This IRR feels high -- and when we go back to check, if we invested $100 in year 1 and got a 78% return for 2 years, we'd end up with $317 (or $117 more than the $200 we get in the above example!) What gives? IRR assumes you can re-invest the $150 cash flow from year 2 at a 78% return for 1 year -- which would be very, very difficult to do. 
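Spelling out that hidden reinvestment assumption with the same hypothetical fund:

```python
# The 78% IRR only equals a 78% compounded return if the year-2 distribution can itself
# be reinvested at 78% for the final year.
import numpy_financial as npf

cash_flows = [-100, 150, 50]
irr = npf.irr(cash_flows)                        # ~0.78
compounded = 100 * (1 + irr) ** 2                # ~$317 if $100 truly grew at the IRR
actual = 150 + 50                                # $200 actually returned
reinvested = 150 * (1 + irr) + 50                # ~$317 -- the hidden reinvestment step
print(round(irr, 3), round(compounded), actual, round(reinvested))
```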

When the IRR is closer to the cost of capital -- say, 9.7% from the above example -- this issue gets swept under the rug, and "IRR" is a reasonable proxy for "return". But when the IRR is astronomical -- say, 78% or even 31% for Yale's PE portfolio -- IRR becomes detached from return, and thus can be misleading.

IRR does poorly with long time horizons (and can be driven by early returns)
See: the Yale example above.

My 2 cents
There seem to be 2 big things in private equity valuations: (1) NAV valuations and (2) exits (i.e. distributions for the LP). Perhaps NAV and distributions should be measured separately, so each fund has (at least) 2 metrics. NAV (reported in annual growth) can help give an idea of how well the portfolio is progressing, based on how other VCs value the start-ups. Distributions (measured by DPI, or maybe even IRR) would be a separate metric. 

A hypothetical example of what a cash-flow-only IRR might look like:
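(The cash flows below are invented purely for illustration -- not from any real fund.)

```python
# One fund's yearly cash flows: negative = capital calls, positive = distributions.
# NAV is deliberately left out; it would be reported separately as annual NAV growth.
import numpy_financial as npf

yearly_cash_flows = [-30, -40, -30, 0, 25, 60, 80, 40]   # illustrative only

called = -sum(cf for cf in yearly_cash_flows if cf < 0)
distributed = sum(cf for cf in yearly_cash_flows if cf > 0)
print("DPI:", round(distributed / called, 2))                         # 2.05x on these numbers
print("Cash-flow-only IRR:", round(npf.irr(yearly_cash_flows), 3))    # ~0.17
```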


An expert re-weighs in
I wanted to check what the CFA Institute had to say about IRR corrections (e.g. MIRR in Excel) and once again ran into Ludo Phalippou. I also checked out the GIPS standards.

The takeaways from the resources:
  • "IRR should not be misconstrued as equivalent to a rate of return."
  • The IRR formula is simple, so it (a) hides assumptions like reinvestment rate and (b) can break down if the number of years gets too long
  • Ludo suggests using a NAV-to-NAV IRR
  • GIPS suggests using a time-weighted return (TWR) for evergreen funds
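On the reinvestment-rate point specifically: numpy_financial has an mirr function too (not just Excel), which makes the reinvestment assumption an explicit input -- the rates below are just illustrative:

```python
# MIRR forces you to state a reinvestment rate instead of implicitly reinvesting at
# the IRR itself. Cash flows are the 78%-IRR fund from above; 8% is an arbitrary rate.
import numpy_financial as npf

cash_flows = [-100, 150, 50]
print(npf.irr(cash_flows))                                            # ~0.78
print(npf.mirr(cash_flows, finance_rate=0.08, reinvest_rate=0.08))    # ~0.46, noticeably lower
```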
Perhaps next up in my meanderings: the private equity measurement alternatives (like PME/ KS-PME).

Friday, January 2, 2026

Reading round-up (Jan 2, 2026)

I keep coming across interesting articles, and I keep emailing them to myself to read later. I haven't found a tool I like that allows me to save these articles and jot down notes. My hope for the new year is that posts here will both unburden my inbox and force me to read/write/think a bit more. Here goes ...


I'm still trying to wrap my head around the details of the math in this, but this paper seems awesome at first glance -- a model for VC returns that is intuitive and replicable. The general ideas and my takeaways:
  • Each round of a VC investment is just an option on the company
  • A start-up's full life cycle can be modeled as a compound option. After the initial investment: do we continue with a follow-on or liquidate? And if we continue: if the start-up is valuable, IPO; if not, exit via M&A (at a discount). 
  • In the past few decades, "VC" really means tech, so we can model VC exposure as a levered NASDAQ-100 bet, with ~2.2x beta in the first two years of the start-up, ~1.6x beta in the next 2 years, and a 1.4x beta thereafter.
  • If you use the levered NASDAQ model to replicate VC cash flows, venture funds in aggregate have underperformed that replication since 2000
    • It's not clear if this is because the NASDAQ has consistently overperformed, or if VC funds have underperformed, or if the start-ups' beta has decreased since 2000, or something else entirely
These results seem simple and interesting enough to try to replicate -- I hope to dig in more. (For example: I'm not sure how much this would differ from, say, a simple 2x levered NASDAQ long with a "pacing plan" for contributions and withdrawals.) I think it would ultimately be cool to be able to take this same model and reverse it: given the valuation of the NASDAQ, would this options-based VC model say it's a poor time to allocate to tech VC? Of course, all models are wrong, but it'd still be interesting to see how the numbers play out. I'd also love to see if this model could break out other types of VC funds (e.g. biotech) with the same results. It follows one of the trends that seems to be playing out: the lines between asset classes are blurring, and so is it possible to get a similar exposure to biotech VC by just using a levered, paced biotech index?
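If I do get around to replicating it, the mechanical core is pretty small -- something like the sketch below, where the beta schedule is the paper's but the index path is a placeholder:

```python
# Compound $1 through a start-up's life using beta * index return each year, with the
# age-dependent beta schedule (~2.2x in years 1-2, ~1.6x in years 3-4, ~1.4x after).
# The index returns here are a made-up toy path, not real NASDAQ-100 data.

def levered_path(annual_index_returns, betas=(2.2, 2.2, 1.6, 1.6, 1.4)):
    """Grow $1, applying the age-appropriate beta to each year's index return."""
    value = 1.0
    for year, r in enumerate(annual_index_returns):
        beta = betas[year] if year < len(betas) else betas[-1]
        value *= 1 + beta * r
    return value

print(levered_path([0.10] * 7))     # ~3.0x on a toy path of +10%/yr for 7 years
```

The interesting part would then be layering a pacing plan of contributions and withdrawals on top and comparing against actual fund cash flows.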

ECON 252 - Financial Markets - Lecture 6: David Swensen (Feb 2, 2011)

Old but good, almost a brief restatement of Swensen's big ideas. (1) An allocator's tools are asset allocation, market timing, and security selection; asset allocation is far and away the most important. (2) Swensen uses the gap between top and bottom quartile managers as a proxy for market efficiency (although to me, it seems like a poor proxy -- more below*). (3) Asset allocation should be at one extreme or the other -- very active (a la Pioneering Portfolio Management) or largely passive (a la Unconventional Success). (4) Measures of "risk" are still insufficient in the funds management world.

*High variance across funds alone seems like a poor measure: (a) venture and private equity firms typically hold fewer assets (higher idiosyncratic risk), and (b) it's cheaper and easier to spin up a venture capital fund, meaning more funds that shouldn't exist get launched anyway. 

  • For (a): a good experiment could be to compare private equity firms' return variance to equal-weighted 10-stock portfolios with valuations marked quarterly -- would these portfolios show similarly large interquartile variation? (A rough simulation sketch follows this list.)
  • For (b): this is harder to account for -- perhaps re-do the measurement only with funds that are not the VC's first fund, to try to weed out the shooting stars? (This obviously introduces a survivorship bias, though ...)
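The sketch for (a), with return assumptions invented purely for illustration:

```python
# Draw many equal-weighted 10-stock portfolios from a common return distribution and
# look at the spread between top- and bottom-quartile portfolios. Parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n_portfolios, n_stocks = 1000, 10
stock_returns = rng.normal(0.08, 0.35, size=(n_portfolios, n_stocks))  # ~N(8%, 35%) per stock
portfolio_returns = stock_returns.mean(axis=1)                          # equal-weighted

q1, q3 = np.percentile(portfolio_returns, [25, 75])
print(f"bottom quartile: {q1:.1%}, top quartile: {q3:.1%}, spread: {q3 - q1:.1%}")
# Even with zero skill differences, 10-stock portfolios show a wide interquartile spread.
```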
I found this paper in the book Abundance (2025) by Ezra Klein and Derek Thompson. Background: to get grant funding, academic researchers can apply to the NIH (large but risk-averse, $28.4 billion budget in 2007) or the Howard Hughes Medical Institute (HHMI, which is more willing to bet on "people, not projects"). HHMI's mandate "urges its researchers to take risks, to explore unproven avenues, to embrace the unknown -- even if it means uncertainty or the chance of failure"; it does this by offering more research freedom, quicker review times (6 weeks), a shorter application, better feedback, and more. The paper finds that HHMI's program in aggregate rewards longer-term success and leads to more "breakthrough" innovation (e.g. top-percentile papers). To be fair, the NIH is federally funded, so any "errant" research is bound to be lambasted in the halls of Congress as "frivolous, fraudulent research," a waste of taxpayer money ... so politics definitely drives the NIH's risk aversion.

The crossover from moonshot academic research (this paper) to moonshot start-ups (i.e. VC) is apt. Perhaps in this comparison, the Small Business Administration (SBA) is the NIH, and VCs as a whole are HHMI. My most immediate thought, though, is about what VCs are willing to fund: the industry seems to coalesce around certain themes or trends, then fund them to death. (Today, it's "agentic AI.") Perhaps this truly is the next big thing, but these bets feel increasingly risk-averse -- more NIH in nature -- and perhaps less "innovative" for both start-up and VC. (VC is tough, though: early consensus is great, building consensus is good, but late consensus is bad!) My parting question: the VC industry has gotten so large that I have to wonder, is VC consensus driving what founders are building, or are great founders driving what VCs invest in?


On the docket:
  • Dive into the numbers for Yale PE's returns, based on Ludo Phalippou's research and recent post
