This article provides a high-level comparison of the leading language models currently available within the CometAI platform. The models, developed by Anthropic, Google, Meta, Mistral, NVIDIA, OpenAI, and xAI, are evaluated across performance strengths, cost-efficiency, and best-fit use cases for academic and enterprise environments. This guide is designed to help faculty and staff select the most appropriate model for tasks such as summarization, writing, planning, coding, and visual document interpretation.
Table of Contents
Anthropic
Google
Meta
Mistral
NVIDIA
OpenAI
xAI (Grok)
Note: This model list and associated details are current as of March 2026.
This section shows how to control which models appear in your chat drop-down.
- Click the Gear icon in the left-hand menu.
- Select Settings from the menu; a new window will open.

- In the popup, stay on the default "Configurations" tab.

- Under "Models", you will see all available model providers and their respective models.
- You can enable or disable models to show or hide them from your chat drop-down.
- The "Include All" column allows you to toggle all models from a given provider at once.


- Models shown in blue are currently active and will appear in your drop-down.
- Deselected models appear white or gray.
Important: The default model for your workspace will always remain active, even if all other models from that provider are disabled.
Asterisk Reminder: If you change any configuration, an asterisk (*) will appear next to the Configurations tab to indicate unsaved changes (Figure 5). To apply your changes, click Save in the bottom-right corner of the window.

To check the pricing details of each model directly in CometAI:
- Click the Gear icon in the left-hand menu.
- Select Settings from the menu; a new window will open.

- In the Settings popup, select the Model Pricing tab.

- You will see a table listing each available model along with pricing details expressed per million tokens. The table includes:
  - Model name
  - Input Tokens (cost for text sent to the model)
  - Output Tokens (cost for text generated by the model)
  - Cached Tokens (lower-cost tokens reused from recent requests)

Cached Tokens refer to results that are served from the platform’s memory (or “read cache”) when the same input was recently submitted. This reduces compute cost and improves response time for repeated prompts.
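The read-cache idea above can be illustrated with a toy sketch. This is a simplification of the concept only, not CometAI's actual caching implementation; the `respond` helper and its placeholder answer are hypothetical.

```python
# Toy illustration of a read cache for repeated prompts.
# A real platform cache is far more sophisticated; this only shows the idea.

cache = {}

def respond(prompt):
    if prompt in cache:            # repeated input: served from the cache,
        return cache[prompt]       # skipping a fresh (more expensive) model call
    result = f"model answer for: {prompt}"  # placeholder for a real model call
    cache[prompt] = result
    return result

first = respond("What is a token?")   # computed by the "model"
second = respond("What is a token?")  # served from the read cache
print(first == second)  # True
```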
Token Billing: At this time, user accounts are billed exclusively for input and output tokens. While cached token pricing is displayed in our model pricing guide, cached token costs are currently provided for informational purposes only.
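Since billing currently covers only input and output tokens, a request's cost can be estimated directly from the per-million-token rates in the pricing table. The rates below are placeholders, not actual CometAI prices; substitute the values shown in the Model Pricing tab.

```python
# Hypothetical example: estimating one request's cost from per-million-token rates.

def estimate_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Return the billed cost for one request.

    Rates are expressed per million tokens, matching the pricing table.
    Cached tokens are omitted because accounts are currently billed
    only for input and output tokens.
    """
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: 2,000 input tokens and 500 output tokens
# at placeholder rates of $3 / $15 per million tokens.
cost = estimate_cost(2_000, 500, input_rate=3.00, output_rate=15.00)
print(f"${cost:.4f}")  # $0.0135  (0.006 input + 0.0075 output)
```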
Selection Strategy
For optimal cost management, begin with our lower-cost models, which often deliver excellent results for many use cases. Should you find the output quality insufficient for your requirements, you can progressively move to more advanced (and more expensive) models until you achieve the desired performance level.
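The escalation strategy above can be sketched as a simple loop. Everything here is illustrative: the model names, the `ask` helper, and the `is_good_enough` check are hypothetical placeholders, not part of the CometAI API.

```python
# Illustrative sketch of the cheapest-first selection strategy described above.

MODELS_BY_COST = ["small-fast-model", "mid-tier-model", "frontier-model"]

def ask(model, prompt):
    # Placeholder for a real chat call to the selected model.
    return f"[{model}] response to: {prompt}"

def is_good_enough(response):
    # Placeholder quality check, e.g. a rubric or human review.
    # Here we pretend only the top-tier model's output suffices.
    return "frontier" in response

def answer_with_escalation(prompt):
    """Try models from cheapest to most expensive until quality passes."""
    for model in MODELS_BY_COST:
        response = ask(model, prompt)
        if is_good_enough(response):
            return model, response
    # No model passed the check: fall back to the most capable response.
    return model, response

model, _ = answer_with_escalation("Summarize this article.")
print(model)  # frontier-model, given the placeholder check above
```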

- Each model in the drop-down has three icons to the right:
  - Supports / Does Not Support Images in Prompts (shutter / shutter-disabled icons)
  - Pricing Indicator (dollar sign):
    - Green Dollar Sign = Inexpensive
    - Yellow Dollar Sign = Moderate Cost
    - Red Dollar Sign = Expensive
  - Context Window (kebab icon):
    - ≣ Large Output Token Limit (≥ 100,000 tokens)
    - ≡ Average Output Token Limit (4,096 – 100,000 tokens)
    - = Below-Average Output Token Limit (< 4,096 tokens)

Note: Users can hover over the (i) icon above the model selection drop-down to see a Legend pop-up menu. The Legend displays explanations for three icon categories: image support, pricing tiers, and context window limits.
Users can hover over any model name in the drop-down to view a detailed description, including recommended use cases, supported input types, and training data cut-off.
