The most recent Yale Alumni Magazine's focus was AI, from the students' and professors' points of view. The overarching questions still have no clear answers: how much should I use AI? And what should I use it for? The general take I agreed with: the point of coursework (like papers) is not the paper itself, but the thinking that goes into it -- the paper is just a tool to get you to concentrate your effort! However, for other real-world use cases, the artifact is the end goal: for example, writing federal grants, generating insurance appeals, or summarizing lengthy documents.
My current analogies for LLMs:
Excellent intern:
If you give an LLM a discrete task with clear instructions, it will mostly do a great job (e.g. write code to do a specific task, generate a haiku on X topic, etc.). However, just like an intern, the LLM has little context for anything, so you often have to be super detailed. (It reminds me of a 2nd grade writing assignment of "write instructions on making a PB&J" -- surprisingly hard! And then scale that up to a mildly hard task ...) If you think of college coursework as just a way to complete assignments, then LLMs-as-an-intern look like an excellent fit.
Wizard of Oz:
LLMs possess knowledge of seemingly everything ... but we all kind of know it's a facade by now ... and yet they do have a lot of knowledge! I've called this the riddle of the two guards before, but the idea is the same: you can ask any question, but you're never 100% sure whether the answer is the truth or made up. Nevertheless, if you need to get up to speed on a topic quickly (and can live with being 80% correct about it), or have weird follow-up questions on the info it returns, or need someone to re-explain a topic to you (e.g. I "chatted" with it to get a good conceptual understanding of diluted EPS calculations), or need someone to come up with counterfactuals, etc. etc. -- there's really no better, cheaper option than LLMs today. But -- you still need to remember that the Wizard doesn't actually exist.
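For the curious, here's roughly the mental model I came away with on diluted EPS, sketched as a tiny Python snippet. The numbers are made up, and it only handles stock options via the treasury stock method (real filings also layer on convertibles and other dilutive securities):

```python
# Illustrative sketch of basic vs. diluted EPS -- hypothetical numbers, options only.

def basic_eps(net_income, preferred_dividends, weighted_avg_shares):
    # Earnings available to common shareholders / average common shares outstanding.
    return (net_income - preferred_dividends) / weighted_avg_shares

def diluted_eps(net_income, preferred_dividends, weighted_avg_shares,
                options_outstanding, exercise_price, avg_market_price):
    # Treasury stock method: in-the-money options add shares, partially offset by
    # the shares the company could hypothetically buy back with the exercise proceeds.
    incremental_shares = 0.0
    if avg_market_price > exercise_price:
        incremental_shares = options_outstanding * (1 - exercise_price / avg_market_price)
    return (net_income - preferred_dividends) / (weighted_avg_shares + incremental_shares)

print(basic_eps(10_000_000, 0, 4_000_000))                     # 2.50
print(diluted_eps(10_000_000, 0, 4_000_000, 500_000, 20, 25))  # ~2.44 (100k extra shares)
```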
Not all-knowing:
LLM-as-an-intern and LLM-as-a-wizard seem like they'd cover 95% of use cases ... but work in the real world and you find a gigantic chasm between the two. The biggest downfalls: (1) LLMs do not have access to all the information, and (2) humans are complex. In software land, once you move from writing code to architecture, decisions become a series of trade-offs. For example, if we choose option A, we'll have to do more maintenance ourselves, but have lower expenses and be less locked into a specific provider; option B, the opposite. In venture land, people pick up on small things -- like whether you feel you can trust the person, how they answered a particular question, whether their heart is truly in the start-up -- that live solely in your head. In personal life, LLMs should not intermediate conversations between me and my wife; they lack the years and years of historical interactions (both with each other and with other people) that, again, reside only in our heads.
Now, the takeaway might just be that this is a solvable data-collection problem (e.g. all your thoughts get downloaded via Neuralink to a central repository) and that LLMs/AI will dominate humanity ... but I find that increasingly hard to believe. In my experience, people like feeling useful, and people like interacting with other people in person (and need it! see: the pandemic). LLMs (and AI more broadly) are excellent -- both higher quality and cheaper -- at the things listed above, but they have their limits. The hard part now, I think, is helping people see where the limits are before the market does.