Models in CometAI Platform

Summary

This document provides a high-level comparison of the leading language models currently available within the CometAI platform. The models, developed by Anthropic, Google, Meta, Mistral, NVIDIA, OpenAI, and xAI, are evaluated across performance strengths, cost-efficiency, and best-fit use cases for academic and enterprise environments. This guide is designed to help faculty and staff select the most appropriate model for tasks such as summarization, writing, planning, coding, and visual document interpretation.

Body


Current List of Models in CometAI 

  • Anthropic
  • Google
  • Meta
  • Mistral
  • NVIDIA
    • UT Aspire ‑ Nemotron 49B
  • OpenAI
  • xAI (Grok)

Note: This model list and associated details are current as of March 2026.  

How to Manage Model Visibility in CometAI 

This section shows how to control which models appear in your chat dropdown. 

  • Open the "Settings" tab by clicking the Gear icon in the left-hand menu.
  • Select Settings from the menu; a new window will open.

Screenshot of the left sidebar showing the ⚙️ Settings tab (marked with ①) and the Settings option within the menu (marked with ②), along with other options including Manage Accounts, Import Conversations, Export Conversations, and Send Feedback.
  • In the popup, stay on the default "Configurations" tab.

Screenshot of the Settings popup window with the Configurations tab highlighted by a red box. 

  • Under "Models", you will see all available model providers and their respective models. 
    • You can enable or disable models to show or hide them from your chat drop-down. 
    • The “Include All” column allows you to toggle all models from a given provider.

Snippet from the Settings menu showing how clicking a provider's checkbox toggles the visibility of all associated models in the dropdown list, with red arrows and dots illustrating the enabled (top) and disabled (bottom) states for the Google provider. 

  • Individual model names can also be clicked to enable/disable them. 

Snippet from the Settings menu showing how directly clicking a model toggles its visibility in the dropdown list, with red arrows and dots illustrating the enabled (top) and disabled (bottom) states for the Google Gemini 2.5-flash-lite. 

  • Models shown in blue are currently active and will appear in your dropdown. 
    • When deselected, they appear white or gray.

Important: The default model for your workspace will always remain active, even if all other models from that provider are disabled.

Asterisk Reminder: If you change any configuration, an asterisk (*) will appear next to the Configurations tab to indicate unsaved changes (Figure 5). To apply your changes, click Save in the bottom-right corner of the window.

Closeup of the Configurations tab with an asterisk (*) indicating that changes have been made but not yet saved.

How to Check Model Pricing in CometAI 

To check the pricing details of each model directly in CometAI: 

  • Open the "Settings" tab by clicking the Gear icon in the left-hand menu.
  • Select Settings from the menu; a new window will open.

Screenshot of the left sidebar showing the ⚙️ Settings tab (marked with ①) and the Settings option within the menu (marked with ②), along with other options including Manage Accounts, Import Conversations, Export Conversations, and Send Feedback. 

  • In the Settings popup, select the Model Pricing tab.

Screenshot of the Settings popup window with the Model Pricing tab highlighted by a red box, displaying token pricing tables for OpenAI models across input, output, and cached tokens.

  • You will see a table listing each available model along with pricing details per million tokens. The table includes: 
    1. Model name
    2. Input Tokens (cost for text sent to the model) 
    3. Output Tokens (cost for text generated by the model)
    4. Cached Tokens (lower-cost responses from prior requests)

Closeup of the Model Pricing table with column headers ① Model, ② Input Tokens, ③ Output Tokens, and ④ Cached Tokens highlighted by a red box and numbered for reference.

What are Cached Tokens? 

Cached Tokens refer to results that are served from the platform’s memory (or “read cache”) when the same input was recently submitted. This reduces compute cost and improves response time for repeated prompts. 

Token Billing: At this time, user accounts are billed exclusively for input and output tokens. While cached token pricing is displayed in our model pricing guide, cached token costs are currently provided for informational purposes only. 
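As a rough illustration of how per-million-token pricing works, the sketch below estimates the billed cost of a single request. The rates used here are hypothetical placeholders for illustration only, not actual CometAI pricing; check the Model Pricing tab for the real figures.

```python
# Estimate the billed cost of one request from per-million-token rates.
# NOTE: these rates are hypothetical examples, NOT actual CometAI pricing.
INPUT_RATE = 3.00    # dollars per 1,000,000 input tokens (example value)
OUTPUT_RATE = 15.00  # dollars per 1,000,000 output tokens (example value)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate billed cost for one request.

    Only input and output tokens count toward billing; cached-token
    rates shown in the pricing table are informational only.
    """
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# A prompt of 2,000 tokens that produces a 500-token reply:
print(f"${request_cost(2_000, 500):.4f}")  # (2000*3.00 + 500*15.00) / 1e6 = $0.0135
```

Because rates are quoted per million tokens, even a fairly long exchange typically costs a fraction of a cent on the lower-cost models.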

Selection Strategy 

For optimal cost management, begin with our lower-cost models, which often deliver excellent results for many use cases. Should you find the output quality insufficient for your requirements, you can progressively move to more advanced (and more expensive) models until you achieve the desired performance level. 

  • When starting a new conversation, a default model is already selected. To choose a different model, click on the model name to open the selection drop-down.

Screenshot of the main chat window. The Model Selection dropdown menu is open, and the selected model is highlighted by a red box.  

  • Each model in the drop-down has three icons to the right:  
    • Supports / Does Not Support Images in Prompts (Shutter/Shutter Disabled images) 
    • Pricing Indicators (Dollar Sign): 
      • Green Dollar Sign = Inexpensive 
      • Yellow Dollar Sign = Moderate Cost 
      • Red Dollar Sign = Expensive 
    • Context Window (kebab icon): 
      • ≣ Large Output Token Limit (≥ 100,000 tokens) 
      • ≡ Average Output Token Limit (4,096 – 100,000 tokens) 
      • = Less than Average Output Token Limit (< 4,096 tokens) 

Screenshot of the Legend popup menu that appears when hovering over the (i) icon in the model selection dropdown, displaying explanations for three icon categories: image support, pricing tiers, and context window limits.

Note: Users can hover over the (i) icon above the model selection drop-down to see a Legend pop-up menu. The Legend displays explanations for three icon categories: image support, pricing tiers, and context window limits.
  • Users can hover over any model name in the drop-down to view a detailed description, including recommended use cases, supported input types, and training data cut-off.

Screenshot of the model selection dropdown with the user hovering over a model name, triggering a tooltip that provides a brief description of the model’s features and intended use cases. 

Have questions about the different Language Models? Please review the CometAI - Model Comparison and Analysis article for further information. 

Details

Article ID: 1453
Created: Mon 8/11/25 8:57 AM
Modified: Fri 4/24/26 5:06 PM