Challenge: great new technology -- but no users
I spent 9 years on the frontlines of the leading healthcare software company in the US (Epic), and a big part of my role was helping hospital systems adopt our latest software features. We'd initially ship new features as settings you could turn on -- but most hospital IT teams didn't bother to turn them on, even when doctors specifically requested them!
The reasons for slow adoption of new technology in hospitals:
1. "If it ain't broke, don't fix it" mentality. Healthcare is notoriously resistant to change, and healthcare IT is no exception. We'd get loud complaints about buttons moving around (when they hadn't) and things changing colors (when they hadn't). So, given the chance to turn on a huge new feature or leave it off -- the choice was pretty simple.
2. A few bad software updates spoil them all. Epic has had a few updates that went very poorly, breaking existing workflows -- and thus making clinical teams even more resistant to change. (Many hospitals would strategically upgrade a few months after the first adopters, letting other organizations find all the bugs.) For both the clinical and IT teams, more bleeding-edge updates meant more pain, so being late adopters was beneficial!
3. Healthcare providers don't know which software updates are coming. Healthcare workers today have enough to worry about, and so you can't fault them for not knowing what exactly is in each new Epic release. Combined with the above two points, there was sometimes a "you can't miss what you don't know" or a "we're pretty busy right now, we'll do it later" mentality.
Solving the problem of new software adoption
This was a wacky and frustrating phenomenon for company leadership -- our software developers were pumping out new code, but few customers were using it! We could try to blame the hospitals' own IT teams, but ultimately, if doctors were complaining that "Epic can't do X," it made Epic look bad. As the long-term technical support team, we had a few solutions:
1. Mandated (and completely free) onsite trips. We were allowed to go onsite to each hospital system once a year, completely paid for by our company. The goals: get to know clinical workflows better, but more importantly, build trust with providers and leadership.
2. Operational outreach calls. We pushed our hospital IT teams to set up operational calls, allowing us to check in directly with hospital leadership. The goal: allow us to hear leadership's top concerns, letting us educate them on new features that addressed those pain points. (As a side note: I imagine many other SaaS companies would use this time to upsell. However, customers paid a flat fee for our support and the entire suite of software, and so these calls were freed to be strictly about solving problems -- and not about misaligned financial incentives.)
3. Shift to making features "on by default." As a company, we realized that making someone turn a feature off was far more powerful than allowing them to turn it on, so we'd ship new features as "on by default." (It's reminiscent of Richard Thaler's "nudges": the default option should be the optimal choice for most people.) This had two side effects: (1) it got more customers using the updates on day 1, and (2) it forced our software developers to ensure the features were production-ready on day 1. It undoubtedly means more discomfort for everyone in the short term -- hospital providers, hospital IT, Epic support teams -- but in the long term, it keeps the software up-to-date.
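The "on by default" idea can be sketched as a feature-flag lookup whose default is enabled, so an organization must explicitly record an opt-out rather than an opt-in. This is a minimal illustration, not Epic's actual mechanism; all names here are hypothetical.

```python
# Sketch of "on by default" feature flags (all names hypothetical).
# A new feature ships enabled; an organization must explicitly opt out.

DEFAULT_ON = True

class FeatureFlags:
    def __init__(self):
        # Only explicit opt-outs are stored; absence means "use the default."
        self._overrides = {}  # (org_id, feature) -> bool

    def opt_out(self, org_id, feature):
        self._overrides[(org_id, feature)] = False

    def is_enabled(self, org_id, feature):
        # New features are on unless the organization turned them off.
        return self._overrides.get((org_id, feature), DEFAULT_ON)

flags = FeatureFlags()
flags.opt_out("mercy_health", "new_scheduling_ui")
print(flags.is_enabled("mercy_health", "new_scheduling_ui"))   # False
print(flags.is_enabled("city_hospital", "new_scheduling_ui"))  # True
```

Flipping `DEFAULT_ON` to `False` recovers the old opt-in world -- the nudge is entirely in which state requires no action.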
In short, we served as the "glue" between operational leadership, the hospital's IT team, and our own software developers. I'm undoubtedly biased, but I think this combination is required for long-term software excellence.
Tech (and AI) adoption requires trust, usability, and discomfort
In the past two years, I've started my own healthcare data small business (Transpose Health), fallen for another change-averse industry while at business school (investment management), and witnessed the omnipresent specter of AI. There are a ton of similarities between the healthcare and investing worlds -- e.g. heavy regulation and change-averse users who've grown up practicing a particular way. The role of AI in investment management mirrors what I spent the formative years of my career doing. What AI adopters need:
1. Trust. Not surprising: people want software they can trust. This can't be overemphasized, especially in industries where accuracy is key (e.g. healthcare, finance). In my view, this is why AI adoption has been so slow -- people can't find a way to trust it. Better to do something slowly, steadily, and accurately ("the old way") than quickly but error-prone ("the AI way"). Solving for trust is the crux of AI adoption.
2. Usability. A cliche, but technology needs to solve real-world pain points. If the software isn't fixing core needs (i.e. poor product-market fit), or the software isn't usable (i.e. poor user interface), or the software isn't baked into users' workflows (i.e. poor adoption), it will fail.
3. Discomfort. Any team adopting new technology will need some tolerance for growing pains -- adopting new technology is seldom easy. What this might look like: requiring people to use a new piece of software so the team can iron out the kinks.
Great AI will be 90% existing software, 10% AI
Great AI products will look a lot like great software products of the past -- laser-focused on trust and usability. This won't change with AI: the bulk of technological change is change management, good governance, and "traditional" software (e.g. non-AI algorithms, clean databases, cloud infrastructure), enhanced with LLM "magic."
I've spent time as a software developer, entrepreneur, technology support, and most recently in various investing roles. The way I view AI depends on the role I'm playing:
- Software developer: LLMs are fantastic at writing decent code and offering ideas on how to architect systems. The focus will continue to be on automation (with or without AI) and building tools you can trust.
- Entrepreneur: For me, LLMs have reduced operating expenses by letting me do most of the development myself.
- Technology support: AI (and technology in general) should free you up to focus on the things that move the needle: understanding workflows and building human-to-human trust.
- Venture investor: Through the lenses above, pure-play AI companies (e.g. OpenAI) require a ton of capital, and the use cases are a bit nebulous. These are perhaps the true "moonshot" venture ideas, but I have a bias toward start-ups whose goal is to build user trust and adoption.
- Investment office operations: LLMs are inherently probabilistic, so they're the wrong tool for many jobs, especially quantitative ones. Calculators (or Excel, or good non-AI APIs) should be used for calculations. LLMs will have a home in the investment office; I think great AI-infused use cases are data entry assistance (e.g. Tamarix), quickly learning a new subsector or investor, and "devil's advocate" roles while writing investment memos.