This article provides a high-level comparison of leading language models currently available within the CometAI platform. The models, developed by Anthropic, OpenAI, Mistral, and Meta, are evaluated across performance strengths, cost-efficiency, and best-fit use cases for academic and enterprise environments. This guide is designed to help faculty and staff select the most appropriate model for tasks such as summarization, writing, planning, coding, and visual document interpretation.
Anthropic:
Meta:
Mistral:
OpenAI:
Note: This model list and associated details are current as of August 2025. New models (such as Grok or GPT-5) may be released, and this guide will be updated accordingly.
This section shows how to control which models appear in your chat dropdown.
1. Open the Settings tab by clicking the gear icon (⚙️) in the left-hand menu (Figure 1 Step 1).
2. Select “Settings” from the menu to open the Settings window (Figure 1 Step 2).

Figure 1: Screenshot of the left sidebar showing the ⚙️ Settings tab (marked with ①) and the Settings option within the menu (marked with ②), along with other options including Manage Accounts, Import Conversations, Export Conversations, and Send Feedback.
3. In the popup, stay on the default Configurations tab (Figure 2).

Figure 2: Screenshot of the Settings popup window with the Configurations tab highlighted by a red box.
4.Under Models, you will see all available model providers and their respective models.
- You can enable or disable models to show or hide them from your chat dropdown.
- The “Include All” column allows you to toggle all models from a given provider (Figure 3).

Figure 3: Snippet from the Settings menu showing how clicking a provider's checkbox toggles the visibility of all associated models in the dropdown list, with red arrows and dots illustrating the enabled (top) and disabled (bottom) states for the Mistral provider.
- Individual model names can also be clicked to enable/disable them (Figure 4).

Figure 4: Snippet from the Settings menu showing how directly clicking a model toggles its visibility in the dropdown list, with red arrows and dots illustrating the enabled (top) and disabled (bottom) states for the Mistral 7B model.
5. Models shown in blue are currently active and will appear in your dropdown.
- When deselected, they appear white in dark mode (see the Mistral models in Figure 4 for an example) or gray in light mode.
6. Important: The default model for your workspace will always remain active, even if all other models from that provider are disabled.
Asterisk Reminder: If you change any configuration, an asterisk (*) will appear next to the Configurations tab to indicate unsaved changes (Figure 5). To apply your changes, click Save in the bottom-right corner of the window.

Figure 5: Closeup of the Configurations tab with an asterisk (*) indicating that changes have been made but not yet saved.
To check the pricing details of each model directly in CometAI:
1. Open the Settings tab by clicking the gear icon (⚙️) in the left-hand menu (Figure 6 Step 1).
2. Select “Settings” from the menu to open the Settings window (Figure 6 Step 2).

Figure 6: Screenshot of the left sidebar showing the ⚙️ Settings tab (marked with ①) and the Settings option within the menu (marked with ②), along with other options including Manage Accounts, Import Conversations, Export Conversations, and Send Feedback.
3. In the Settings popup, select the “Model Pricing” tab (Figure 7).

Figure 7: Screenshot of the Settings popup window with the Model Pricing tab highlighted by a red box, displaying token pricing tables for OpenAI, Claude, and Mistral models across input, output, and cached tokens.
4. You will see a table listing each available model along with pricing details in dollars per million tokens. The table includes:
- Model name (Figure 8 No. 1)
- Input Tokens (cost for text sent to the model) (Figure 8 No. 2)
- Output Tokens (cost for text generated by the model) (Figure 8 No. 3)
- Cached Tokens (lower-cost responses from prior requests) (Figure 8 No. 4)

Figure 8: Closeup of the Model Pricing table with column headers ① Model, ② Input Tokens, ③ Output Tokens, and ④ Cached Tokens highlighted by a red box and numbered for reference.
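Per-million-token prices translate into per-request costs with simple arithmetic. The sketch below is illustrative only; the rates shown are hypothetical, so always check the Model Pricing tab for the actual figures.

```python
def estimate_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Estimate the cost of one request in dollars, given per-million-token prices."""
    return (input_tokens / 1_000_000) * price_in_per_m + \
           (output_tokens / 1_000_000) * price_out_per_m

# Hypothetical rates: $3.00 per 1M input tokens, $15.00 per 1M output tokens.
cost = estimate_cost(input_tokens=2_000, output_tokens=500,
                     price_in_per_m=3.0, price_out_per_m=15.0)
print(f"${cost:.4f}")  # prints $0.0135
```

Note that output tokens typically cost several times more than input tokens, which is why long generated responses dominate the bill even for short prompts.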
What are Cached Tokens?
Cached Tokens refer to results that are served from the platform’s memory (or “read cache”) when the same input was recently submitted. This reduces compute cost and improves response time for repeated prompts.
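Conceptually, a read cache behaves like a lookup table keyed by the prompt: an identical recent input returns the stored response instead of recomputing it. The toy sketch below illustrates the idea only; it is not CometAI's actual caching implementation.

```python
import hashlib

class ReadCache:
    """Toy read cache: identical prompts return the stored response."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt: str) -> str:
        # Hash the prompt so equal inputs map to the same cache entry.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt: str):
        return self._store.get(self._key(prompt))  # None on a cache miss

    def put(self, prompt: str, response: str) -> None:
        self._store[self._key(prompt)] = response

cache = ReadCache()
prompt = "Summarize the attached syllabus"
if cache.get(prompt) is None:                  # first request: miss, full compute cost
    cache.put(prompt, "model response here")
repeat = cache.get(prompt)                     # repeated prompt: served from cache
```

In a real platform the cache would also expire entries, which is why only recently repeated prompts benefit.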
Important Notes
Token Billing: At this time, user accounts are billed exclusively for input and output tokens. While cached token pricing is displayed in our model pricing guide, cached token costs are currently provided for informational purposes only.
For optimal cost management, begin with our lower-cost models, which often deliver excellent results for many use cases. Should you find the output quality insufficient for your requirements, you can progressively move to more advanced (and more expensive) models until you achieve the desired performance level.
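The "start cheap, escalate only if needed" strategy above can be sketched as a simple loop. Everything here is a placeholder for illustration: the model names, the `ask_model` call, and the `good_enough` quality check all stand in for your own model choices, API calls, and acceptance criteria.

```python
# Hypothetical model tiers, cheapest first; use the real names from your dropdown.
MODEL_TIERS = ["mistral-7b", "gpt-4o-mini", "claude-sonnet"]

def ask_model(model: str, prompt: str) -> str:
    """Stand-in for a real API call to the selected model."""
    return f"[{model}] draft answer to: {prompt}"

def good_enough(answer: str) -> bool:
    """Placeholder quality check; replace with your own acceptance criteria."""
    return len(answer.split()) >= 5

def answer_with_escalation(prompt: str) -> str:
    answer = ""
    for model in MODEL_TIERS:        # try the cheapest model first
        answer = ask_model(model, prompt)
        if good_enough(answer):      # stop escalating once quality suffices
            return answer
    return answer                    # otherwise return the best (last) attempt
```

The loop only ever pays for the expensive tiers when the cheaper ones fall short, which mirrors the manual workflow described above.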
1. When starting a new conversation, a default model is already selected. To choose a different model, click on the model name to open the selection dropdown (Figure 9).
Figure 9: Screenshot of the main chat window. The Model Selection dropdown menu is open, and the selected model is highlighted by a red box.
2. Each model in the dropdown has three icons to the right:
- Image Support: a shutter icon means the model accepts images in prompts; a disabled-shutter icon means it does not.
- Pricing Indicators (Dollar Sign):
- Green Dollar Sign = Inexpensive
- Yellow Dollar Sign = Moderate Cost
- Red Dollar Sign = Expensive
- Context Window (kebab icon):
- ≣ Large Output Token Limit (≥ 100,000 tokens)
- ≡ Average Output Token Limit (4,096 – 100,000 tokens)
- = Less than Average Output Token Limit (< 4,096 tokens)
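The three context-window icons map directly onto token-limit thresholds, so the classification can be written as a small helper. This is a sketch of the thresholds listed above, not a CometAI API.

```python
def output_limit_icon(max_output_tokens: int) -> str:
    """Map a model's output token limit to its dropdown icon tier."""
    if max_output_tokens >= 100_000:
        return "≣"   # large output token limit
    if max_output_tokens >= 4_096:
        return "≡"   # average output token limit
    return "="       # below-average output token limit
```

For example, a model capped at 4,096 output tokens falls in the average tier, while one capped at 2,048 shows the "=" icon.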
Note: Users can hover over the (i) icon above the model selection dropdown to see a Legend popup menu. The Legend displays explanations for three icon categories: image support, pricing tiers, and context window limits (Figure 10).

Figure 10: Screenshot of the Legend popup menu that appears when hovering over the (i) icon in the model selection dropdown, displaying explanations for three icon categories: image support, pricing tiers, and context window limits.
- Users can hover over any model name in the dropdown to view a detailed description, including recommended use cases, supported input types, and training data cut-off (Figure 11).

Figure 11: Screenshot of the model selection dropdown with the user hovering over a model name, triggering a tooltip that provides a brief description of the model’s features and intended use cases.
If you have more questions about the different language models, please review the CometAI - Model Comparison and Analysis article for further information.