Friday, October 31, 2025

Courting emerging VC managers ... with podcast ads?

I'm not sure if it's my natural disposition, desensitization from the current political regime, or advanced age, but I'm not that easy to surprise. I was listening to today's 20VC podcast episode, and like all popular podcasts, there are a few minutes of advertisements.

To my surprise, the second ad was for Harvard Management Company, advertised as a "truly exceptional partner and savvy investors" who are looking to partner with both emerging and established venture investors -- so just reach out! It's the first time I've heard an ad from an LP targeted at GPs (but it's also very possible it's been a growing thing I'm ignorant of). 

On the one hand, maybe it's a stroke of marketing genius? As institutional investors of all sizes have crowded into venture, the first-movers (the Yales and Harvards) are having to fight their way into top VC fund allocation more and more. (See also: Yale's recent Prospect Fellowship.) Harvard is also a $56.7B endowment; whatever the ad costs is negligible, especially if it brings in a few investable leads. 

But... to me, it feels more like a debasement of the vaunted Harvard name. What does it say about HMC if they are resorting to the same marketing techniques as growth-stage start-ups? If more university endowments copy HMC's strategy here, am I going to start getting more ads from these LPs? (I imagine a dystopian world where my Netflix commercials alternate between drugs I don't need and LPs I won't invest with.) More worryingly, perhaps it signals an upper limit on the sourcing strategy (or on the network effects of the Harvard name) of the current HMC venture team. 

Of course, my concerns could be entirely misplaced. It could just be a creative experiment, or part of an early trend that has legs. I don't typically see things that surprise me -- especially not in the endowment space -- so this is, at the very least, an interesting new approach to the age-old sourcing problem.

Thursday, October 30, 2025

Asking good questions and the riddle of the two guards

I often think about how to reframe complex things through different lenses (i.e. "If I were to explain this to my mother/father/wife, how might they really understand the fundamental idea?"). There are two ideas that have come together for me that I think capture commercial LLMs well:

  1. Generalist investors know a little (to a medium amount) about a lot of things, and so they rely on the "real" experts (e.g. SMEs, fund managers) to get comfortable with a company, investment strategy, etc. One of the most sought-after skills in these roles (and hardest to train) is the ability to ask good questions that lead to a good investment decision. I like to think of these experts as an oracle -- they have all the answers you're looking for, if only you ask the right questions. 
  2. There is a classic riddle where you have two guards, one who always lies and one who always tells the truth. They each guard a door: one leads to freedom, the other to certain death. You can ask a question to figure out which door leads to freedom.
I think a great way to think about commercial LLMs (like ChatGPT) is a combination of these two:
  1. They are trained on the entire internet, so they are knowledgeable about almost any subject imaginable, but
  2. You can never really tell if you're talking to the truth-telling guard (i.e. real expertise aggregated from reliable sources) or the lying guard (i.e. just an answer that sounds good).
Perhaps not a novel insight, but for me, it's a good way to reframe what LLMs are good at. If you're looking to learn about a new area quickly (e.g. dive into the semiconductor industry, review a contract), then point #1 shines, and LLMs are amazing. If, however, you need to base real decisions on it (e.g. make a semiconductor investment, send out a contract to a customer), then point #2 starts to rear its head. It still requires the discernment of a thoughtful human to ferret out the truth.
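
For what it's worth, the riddle itself has a well-known escape hatch -- ask either guard which door the other guard would point to as the door to freedom, then walk through the opposite one -- and that escape hatch is exactly what LLMs lack. Here's a small Python sketch (door names and function names are my own, purely for illustration) that brute-forces the four possible arrangements to confirm the strategy always works:

from itertools import product

doors = ("left", "right")

def spoken(is_liar: bool, true_answer: str) -> str:
    """What a guard actually says, given the true answer to the question."""
    if not is_liar:
        return true_answer
    return doors[0] if true_answer == doors[1] else doors[1]  # the liar names the other door

for freedom_door, asked_is_liar in product(doors, (True, False)):
    other_is_liar = not asked_is_liar
    # The true answer to "which door leads to freedom?" is freedom_door;
    # this is what the other guard would claim:
    others_claim = spoken(other_is_liar, freedom_door)
    # The guard we actually ask relays (truthfully or not) that claim:
    reply = spoken(asked_is_liar, others_claim)
    # Strategy: take the door the reply does NOT name.
    chosen = doors[0] if reply == doors[1] else doors[1]
    assert chosen == freedom_door

print("Asking about the other guard and taking the opposite door always works.")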

Monday, October 20, 2025

Current Investment Preferences

One big reason why I've chosen Blogger over more modern solutions (e.g. Substack, Medium) is that it relieves me of the mental burden of writing to an audience. I think writing for an audience nudges you towards the LinkedIn style of writing: often overly brief, sometimes to the point of hollowness. 

This mental unburdening also allows me to post more journal-like entries, like this one on my current investment preferences. It feels like a nice, succinct collection of everything I've learned and done investing-wise in the past two years. Hopefully it'll be nice to reflect on in years to come.


My current portfolio:
  • Passive ETFs (65%) – Prior to business school, I followed the wisdom of Jack Bogle and Charley Ellis. The appeal: low cost and low time commitment, perfect for someone who didn’t have time to research individual stocks.
  • Cash (money-market) (20%) – In the past couple of years, I’ve built up a small cash cushion. The reasons: (i) the market has performed well, so I want to de-risk; (ii) I want liquidity to invest (e.g. in a house, in stocks) “when others are fearful.” 
  • Real estate (10%) – My wife and I maintain a cash-flow-positive rental with a low mortgage in a good location.
  • Deep conviction investments (5%) – This includes companies I think are valued/positioned well (e.g. GOOGL), a “fun” investment in a local climbing gym, and trading experiments (VIX put in April 2025, short-term quantum holds in 2024). 
My goals now:
  • Diversification – I’m nervous about my large US ETF exposure and am actively diversifying internationally. I’ve also considered (but not yet implemented) protective puts against the market for downside protection.
  • Increased risk – As I move from grad student to salaried employee, I hope to shift part of my ETF/cash buckets into “deep conviction” investments. 
  • Conviction – I aspire to invest with grounded, fundamentals-based conviction. I’m working on building software tools and frameworks to analyze stocks, industries, and start-ups in a way I can trust.
  • Practice – I’ve said no to angel investments in start-ups and a VC fund, and I hope to continue evaluating investments across asset classes.

Wednesday, October 1, 2025

AI adoption requires trust, usability, and a little discomfort

Challenge: great new technology -- but without any users

I spent 9 years on the frontlines of the leading healthcare software company in the US (Epic), and a big part of my role was helping hospital systems adopt our latest software features. We'd initially ship new features as settings you could turn on, but most hospital IT teams didn't bother to turn them on, even when doctors specifically requested them!  

The reasons for slow adoption of new technology in hospitals:

1. "If it ain't broke, don't fix it" mentality. Healthcare is notoriously resistant to change, and healthcare IT is no exception. We'd get loud complaints about buttons moving around (when they hadn't) and things changing colors (when they hadn't). So, given the chance to turn on a huge new feature or leave it off -- the choice was pretty simple.

2. A few bad software updates spoil them all. Epic has had a few updates that have gone very poorly, breaking existing workflows -- and thus making clinical teams even more resistant to change. (Many hospitals would strategically try to upgrade a few months after the first adopters did, allowing other organizations to find all the bugs.) For both the clinical and IT teams, more bleeding-edge updates meant more pain, so being late adopters was beneficial!

3. Healthcare providers don't know which software updates are coming. Healthcare workers today have enough to worry about, and so you can't fault them for not knowing what exactly is in each new Epic release. Combined with the above two points, there was sometimes a "you can't miss what you don't know" or a "we're pretty busy right now, we'll do it later" mentality. 

Solving the problem of new software adoption

This was a wacky and frustrating phenomenon for company leadership -- our software developers were pumping out new code, but few customers were using it! We could try to blame the hospitals' own IT teams, but ultimately, if doctors were complaining that "Epic can't do X," it made Epic look bad. As the long-term technical support, we had a few solutions:

1. Mandated (and completely free) onsite trips. We were allowed to go onsite to each hospital system once a year, completely paid for by our company. The goals: get to know clinical workflows better, but more importantly, build trust with providers and leadership.

2. Operational outreach calls. We pushed our hospital IT teams to set up operational calls, allowing us to check in directly with hospital leadership. The goal: allow us to hear leadership's top concerns, letting us educate them on new features that addressed those pain points. (As a side note: I imagine many other SaaS companies would use this time to upsell. However, customers paid a flat fee for our support and the entire suite of software, and so these calls were freed to be strictly about solving problems -- and not about misaligned financial incentives.) 

3. Shift to making features "on by default." As a company, we realized that making someone turn a feature off was far more powerful than allowing them to turn it on, so we'd ship new features as "on by default." (It's reminiscent of Richard Thaler's "nudges": the default option should be the optimal choice for most people.) This had two side effects: (1) more customers used the updates on day 1, and (2) it forced our software developers to ensure the features were production-ready on day 1. It also undoubtedly meant more discomfort for everyone in the short term -- hospital providers, hospital IT, Epic support teams -- but in the long term, it meant the software stayed up-to-date. (A minimal sketch of the default-on idea follows below.)
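
To make the idea concrete, here is a minimal Python sketch of the default-on pattern. It is purely illustrative -- the class, flag, and feature names are hypothetical, and this is not how Epic's configuration actually works -- but it shows why flipping the default matters: the do-nothing path now leaves the new feature live.

from dataclasses import dataclass, field

@dataclass
class FeatureFlags:
    # Hypothetical per-customer overrides; any feature not listed falls back to the default.
    overrides: dict = field(default_factory=dict)

    def is_enabled(self, feature: str, default_on: bool = True) -> bool:
        return self.overrides.get(feature, default_on)

# Opt-in world: the feature stays dark until someone remembers to turn it on.
assert FeatureFlags().is_enabled("new_reconciliation_view", default_on=False) is False

# Opt-out world: the feature is live on day 1, and turning it OFF is the deliberate action.
assert FeatureFlags().is_enabled("new_reconciliation_view") is True
assert FeatureFlags(overrides={"new_reconciliation_view": False}).is_enabled("new_reconciliation_view") is False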

In short, we served as the "glue" between operational leadership, the hospital's IT team, and our own software developers. I'm undoubtedly biased, but I think this combination is required for long-term software excellence.

Tech (and AI) adoption requires trust, usability, and discomfort

In the past two years, I've started my own healthcare data small business (Transpose Health), have fallen for another change-averse industry while at business school (investment management), and have witnessed the omnipotent specter of AI. There's a ton of similarities between the healthcare and investing worlds -- e.g. heavy regulation and change-averse users who've grown up practicing a particular way. The role of AI in investment management mirrors what I've spent the formative years of my career doing. What AI adopters need:

1. Trust. Not surprising: people want software they can trust. This can't be overemphasized, especially in industries where accuracy is key (e.g. healthcare, finance). In my view, this is why AI adoption has been so slow -- people can't find a way to trust it. Better to do something slowly, steadily, and accurately ("the old way") than quickly but error-prone ("the AI way"). Solving for trust is the crux of AI adoption. 

2. Usability. A cliche, but technology needs to solve real-world pain points. If the software isn't fixing core needs (i.e. poor product-market fit), or the software isn't usable (i.e. poor user interface), or the software isn't baked into users' workflows (i.e. poor adoption), it will fail.

3. Discomfort. Any team adopting new technology will need to have some tolerance for growing pains -- adopting new technology is seldom easy. What this might look like: forcing people to use new software so that the kinks actually get ironed out. 

Great AI will be 90% existing software, 10% AI

Great technology adopters will look a lot like great technology adopters of the past -- laser-focused on trust and usability. This won't change with AI -- the bulk of technological change is change management, good governance, and "traditional" software (e.g. non-AI algorithms, clean databases, cloud infrastructure), enhanced with LLM "magic." 

I've spent time as a software developer, entrepreneur, technology support, and most recently in various investing roles. The way I view AI depends on the role I'm playing:

  • Software developer: LLMs are fantastic at writing decent code and offering ideas on how to architect a solution. The focus will continue to be on automation (with or without AI) and building tools you can trust.
  • Entrepreneur: For me, LLMs have reduced operating expenses by allowing me to do most of the development myself (with assistance from LLMs).
  • Technology support: AI (and technology in general) should free you up to focus on the things that move the needle: understanding workflows and building human-to-human trust.
  • Venture investor: From the perspective above: pure play AI companies (e.g. OpenAI) require a ton of capital, and the use cases are a bit nebulous. These are perhaps the true "moonshot" venture ideas, but I have a bias towards start-ups whose goal is to build user trust and adoption.
  • Investment office operations: LLMs are inherently probabilistic, so they're the wrong tool for many jobs, especially quantitative ones. Calculators (or Excel, or good non-AI APIs) should be used for calculations. LLMs will have a home in the investment office; I think great AI-infused use cases are data entry assistance (e.g. Tamarix), quickly learning a new subsector or investor, and "devil's advocate" roles while writing investment memos. (A quick sketch of this division of labor is below.) 
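
As a sketch of that division of labor -- with made-up cashflow numbers and my own helper functions, not any particular vendor's API -- let deterministic code compute the figure, and let the LLM only quote and narrate it:

def npv(rate, cashflows):
    """Net present value of annual cashflows, where cashflows[0] happens today."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    """Solve NPV(rate) = 0 by bisection; assumes one sign change on [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Hypothetical fund cashflows: capital calls are negative, distributions positive.
flows = [-100.0, -50.0, 20.0, 60.0, 150.0]
print(f"Net IRR: {irr(flows):.2%}")  # the LLM gets to quote this number, not compute it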
