Discovered: Dec 20, 2023 08:48
How to Analyze Large Text Datasets with LangChain and Python

(Note to self: I'd like to do this with an open-source version of LangChain.)

QUOTE: I added time.sleep(20) as comments, since it's possible that you'll hit rate limits when working with large texts, most likely if you have the free tier of the OpenAI API. Since I think it's handy to know how many tokens and credits you're using with your requests so as not to accidentally drain your account, I also used with get_openai_callback() as cb: to see how many tokens and credits are used for each chapter.
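The pattern the quote describes, pausing between requests and tallying token usage per chapter, can be sketched without the OpenAI API at all. Here summarize_chapter is a hypothetical stand-in for an LLM call, and the token tally mirrors what LangChain's get_openai_callback() context manager would report; this is a sketch of the pattern, not the article's actual code:

```python
import time

def summarize_chapter(text):
    # Hypothetical stand-in for an LLM call: returns a fake
    # "summary" plus a token count (here just the word count).
    tokens_used = len(text.split())
    return text[:40], tokens_used

def process_chapters(chapters, pause_seconds=0):
    # Process chapters one at a time, tallying token usage so you
    # don't accidentally drain your account, and optionally sleeping
    # between requests to stay under free-tier rate limits
    # (the article uses time.sleep(20)).
    total_tokens = 0
    summaries = []
    for chapter in chapters:
        summary, tokens = summarize_chapter(chapter)
        summaries.append(summary)
        total_tokens += tokens
        time.sleep(pause_seconds)
    return summaries, total_tokens

summaries, total = process_chapters(["one two three", "four five"])
print(total)  # 5
```

With a real LangChain setup, the body of the loop would instead run the chain inside with get_openai_callback() as cb: and read cb.total_tokens after each call.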

Leave a comment on GitHub