1. 5

    1. 2

      full paper

      Abstract: Large Language Model (LLM) based chatbots such as OpenAI’s ChatGPT 4.0 and Google’s Bard are emblematic of broader generative artificial intelligence advances. They have garnered substantial attention in both academic and public discourse. This paper examines the intersection of LLM chatbots with quantum science and technology, focusing on their potential to empower research methodologies and pedagogical approaches within these disciplines. It explores, with many examples, the capabilities of LLM-based tools by assessing their existing and potential future utility in various academic functions. These range from facilitating basic question-and-answer interactions to more complex activities such as software development, scientific paper writing, paper reviewing, experiment preparation, ideation, and fostering collaborative research practices in quantum science. The rapid evolution of LLMs and other related tools implementing various forms of reasoning suggests they have the potential to significantly alter the research and educational landscapes, similarly to the transformative impact of the Internet and its associated technological advancements. Accordingly, this paper suggests the creation of a domain-specific LLM-based chatbot for quantum science using open-source models. It also contextualizes LLM-based chatbots within the broader spectrum of machine learning technologies already used in the advancement of quantum science and technology. It then briefly explores how quantum computing might or might not further advance machine learning applications and language-based models. The conclusion is that AI may have a profound impact in shaping the trajectory of quantum science research, education, and technology development, while the reverse is quite uncertain, at least in the short to mid-term.

    2. 1

      This is written closer to a blog post than a formal scientific paper/review.

      A little nitpick: in the description of capability 13 on page 13, the LLMs aren’t “using” generative AI and then “not using it”. As far as I understand it, the output just *is* generative AI; it’s wrong to think of the LLMs as actively doing anything other than generating something in response to the prompt. So there’s a little confusion as to what’s going on under the hood, but hey, that’s why I’m here.

      TLDR: it’s a summary/long blog post of one guy’s exploration into LLMs, with a nominal tip of the hat to quantum computing, since a fair amount of his questions amount to asking the LLM things and comparing the answers to what he expected. Unsurprisingly, given the LLM hype, he comes to the conclusion that yes, in fact, LLMs can do a lot of things; substituting concepts from quantum computing into prompts works roughly as well as substituting in any other field: “nuclear power plants”, “TypeScript”, etc.

      1. 3

        xavaav 12 days ago

        Thanks for the feedback. You’re right that on page 13 I used the term LLM instead of “LLM-based chatbot”, as in most other places in the document, and I corrected it right away. I make it clear from pages 2 to 7 that these terms must not be confused, and I describe a lot of what’s happening “under the hood”, as you suggest. LLMs like GPT 4.0 are indeed text gen-AI, while chatbots like ChatGPT 4.0 can integrate various other tools (chain-of-thought prompting, knowledge graphs, RAG, and agents) on top of accessing third-party tools like Wolfram GPT, since ChatGPT 4.0 enables the integration of plugins. This must not be confused with the various fine-tuning approaches, including reinforcement learning from human feedback, which “just” modify the internal structure of the LLM.
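        To make the distinction concrete, here is a minimal toy sketch in Python of a chatbot wrapping a bare LLM with a retrieval (RAG-style) layer. Everything here is a hypothetical stub for illustration, not any real model or API:

        ```python
        # Toy sketch: a bare LLM vs. an LLM-based chatbot that layers
        # retrieval on top. All functions are hypothetical stubs.

        def base_llm(prompt: str) -> str:
            # Stand-in for a raw text-generation model: prompt in, text out.
            return f"completion for: {prompt}"

        def retrieve(prompt: str, corpus: dict) -> str:
            # Toy retrieval step (the "RAG" layer): return the corpus
            # passage whose key appears in the prompt, if any.
            for key, passage in corpus.items():
                if key in prompt.lower():
                    return passage
            return ""

        def chatbot(prompt: str, corpus: dict) -> str:
            # The chatbot wraps the LLM: it may augment the prompt with
            # retrieved context before calling the model at all.
            context = retrieve(prompt, corpus)
            if context:
                return base_llm(f"context: {context}\nquestion: {prompt}")
            return base_llm(prompt)

        corpus = {"qubit": "A qubit is a two-level quantum system."}
        print(chatbot("What is a qubit?", corpus))
        ```

        The point of the sketch is only that the tool orchestration lives outside the model: swapping `retrieve` for an agent, a plugin call, or a knowledge-graph lookup changes the chatbot without touching `base_llm`, whereas fine-tuning changes `base_llm` itself.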

        What I pinpoint on page 13 is that, when asked for a chart, ChatGPT 4.0 asks DALL-E to build it (with approximate results) when it could perhaps also try to find an existing chart using more classical search tools (which it does for some regular text prompts, using the Bing search engine). I also wonder how these chart-generation tools could be expanded with the integration of some adversarial features, particularly for their text components (à la GANs).

        You are right that this kind of capability exploration can be applied to many scientific and other fields, as is currently done in healthcare, chemistry, and legal work. I found it interesting to look at these tools’ capabilities in quantum science and explore their potential, which seems promising and may both add a lot of value and have some negative side effects to mitigate. I had not found any paper on this yet. My paper may read like a blog post, but it was too long for that format, particularly considering the number of bibliographical references I use.

        We must indeed be careful about the LLM hype and understand their real capabilities, how they evolve, how hybridized LLM-based chatbots will fix some of their flaws and improve on them, or whether we will have to wait for entirely different designs like the one Yann LeCun is working on at Meta, which I mention in the paper.

    3. 1
      1. 1

        @Hoss I think your comment is empty? I cannot see it at least