A.I. Chat for Professionals: Documentation

Encrypted and secure, with no rate limits and early, validated access to the latest technology powered by OpenAI. A streamlined user interface, a host of productivity features, and unlimited access to the powerful new GPT4-Turbo model with an exceptionally large input limit.

  • Analysis Chat. Upload documents (such as PDFs, spreadsheets, and Word or text files) or image files and use them as data or context for your Chat. We use advanced document comprehension and data extraction methods to optimize your uploaded files for high-fidelity recall by the A.I.
  • Chat Folders. Manage your Chat Histories using Chat Folders.
  • Quick Prompts. A well-crafted prompt can help get the best results from an A.I. system. If you create some great prompts that you keep coming back to, you can save them as Quick Prompts to quickly and easily insert them into your Chat. Tidy your Quick Prompts into Folders for extra organization, and mark your top go-to Prompts as Favourites.
  • Chat Forks. The “Create New Fork” button appears after each response message from the A.I. This option creates a copy of your Chat containing all of the conversation history up to that point. It makes it easy to branch a conversation from any point in your back-and-forth with the A.I., enabling you to try different ways of interacting with the A.I. or to revisit and continue past conversations.
  • Share a Chat, or an individual Message. Click the Share button at the top of the Chat window to create a link to a snapshot of your Chat. Click the Sharing button next to one of your Prompts or an A.I. Response within the Chat to share just that message on its own.
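Conceptually, a Fork is simply a copy of the message history up to a chosen point, after which the two Chats evolve independently. A minimal sketch in Python (the names and data structures here are illustrative only, not the product’s actual internals):

```python
# Illustrative sketch only: models a Chat as a list of messages and a
# Fork as an independent copy of the history up to a chosen point.
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str   # "user" or "assistant"
    text: str


@dataclass
class Chat:
    messages: list = field(default_factory=list)

    def fork_at(self, index: int) -> "Chat":
        """Return a new Chat containing the history up to message `index`."""
        return Chat(messages=list(self.messages[: index + 1]))


chat = Chat([
    Message("user", "Summarize this story."),
    Message("assistant", "Here is a summary..."),
    Message("user", "Now rewrite it as a poem."),
    Message("assistant", "A poem version..."),
])

# Fork after the first A.I. response (index 1): the new Chat keeps only
# the first exchange, so you can take the conversation a different way
# without affecting the original.
fork = chat.fork_at(1)
```

Because the fork holds its own copy of the history, continuing either conversation leaves the other untouched.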

Notes on Qoken Usage:

  • Example: a discussion of a pasted-in 500-word short story, with five back-and-forths with the A.I.; Qoken usage depends on the chosen model:
    • GPT3.5-Turbo: ~0.12 Qokens (lower cost to use, and tends to give shorter, less detailed responses)
    • GPT4-Omni: ~0.32 Qokens (higher cost to use, and tends to give longer, more detailed responses)

A.I. Chat usage is influenced by several factors, including the model used for the Chat, the amount of content in your Prompts and in the A.I.’s Responses, and the overall length of the Chat. Longer Chats (those with lots of back-and-forths) accumulate Qoken usage faster, because the Chat History is brought forward in the background to provide continuous context within the Chat session. This is especially true when using the GPT4-Omni model: it has a very large “memory” (i.e., input limit/context window) that greatly improves its performance, but it also means that the longer the Chat, the greater the background Qoken usage needed to maintain that improved context.
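The compounding effect of a long Chat can be seen with a little arithmetic. A hedged sketch (the per-message sizes here are made-up illustrative numbers, not actual billing rates or real token counts):

```python
# Illustrative only: shows why resending the Chat History with each turn
# makes usage grow faster than the number of turns. Numbers are made up.
def cumulative_input_tokens(tokens_per_message: int, turns: int) -> int:
    """Total input tokens over `turns` exchanges, assuming the full prior
    history (2 messages per earlier turn) is resent with each new prompt."""
    total = 0
    for turn in range(turns):
        history = 2 * turn * tokens_per_message  # all earlier prompts + responses
        prompt = tokens_per_message              # this turn's new prompt
        total += history + prompt
    return total


# With 100-token messages, 5 turns resend far more than 5 x 100 tokens:
print(cumulative_input_tokens(100, 5))  # → 2500 (vs. 500 if history were not resent)
```

The total grows quadratically with the number of turns, which is why starting a fresh Chat (resetting the history term to zero) reduces Qoken usage.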

You can reduce Qoken usage by starting new Chats when you no longer need the A.I. to remember earlier parts of the conversation, or by using the GPT3.5-Turbo model when you don’t need the extra performance.

Need help or have a suggestion?
Contact MQO A.I. Hub Support: