In this talk, I will describe the exciting new possibilities enabled by large language models (LLMs). I will start with a high-level overview of approaches in AI, including a bit of history. I will then place LLMs in this context and show a few examples of their common-sense reasoning abilities. Next, I will argue why imprecise, continuous reasoning is important for reasoning about discrete structures (such as mathematical objects), and why we should try to extend LLMs toward a more consistent and more complex form of reasoning. I will finish with a short sketch of how this could be done.